00:00:00.001 Started by upstream project "autotest-nightly" build number 3914
00:00:00.001 originally caused by:
00:00:00.001 Started by user Latecki, Karol
00:00:00.002 Started by upstream project "autotest-nightly" build number 3912
00:00:00.002 originally caused by:
00:00:00.002 Started by user Latecki, Karol
00:00:00.003 Started by upstream project "autotest-nightly" build number 3911
00:00:00.003 originally caused by:
00:00:00.003 Started by user Latecki, Karol
00:00:00.004 Started by upstream project "autotest-nightly" build number 3909
00:00:00.004 originally caused by:
00:00:00.004 Started by user Latecki, Karol
00:00:00.004 Started by upstream project "autotest-nightly" build number 3908
00:00:00.005 originally caused by:
00:00:00.005 Started by user Latecki, Karol
00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.137 The recommended git tool is: git
00:00:00.138 using credential 00000000-0000-0000-0000-000000000002
00:00:00.139 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.200 Fetching changes from the remote Git repository
00:00:00.202 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.256 Using shallow fetch with depth 1
00:00:00.256 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.256 > git --version # timeout=10
00:00:00.302 > git --version # 'git version 2.39.2'
00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.326 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.327 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5
00:00:07.907 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.921 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.938 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD)
00:00:07.938 > git config core.sparsecheckout # timeout=10
00:00:07.948 > git read-tree -mu HEAD # timeout=10
00:00:07.967 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5
00:00:07.990 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs."
00:00:07.990 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:08.088 [Pipeline] Start of Pipeline
00:00:08.103 [Pipeline] library
00:00:08.105 Loading library shm_lib@master
00:00:08.105 Library shm_lib@master is cached. Copying from home.
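The checkout above is Jenkins fetching a single Gerrit patchset: a depth-1 fetch of the change ref refs/changes/29/24129/6, then a detached checkout of FETCH_HEAD. A minimal standalone sketch of the same idiom, using the repository URL and change ref from the log (the jbp directory name is illustrative):

    # Shallow-fetch one Gerrit patchset and check it out detached.
    REPO_URL="https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool"
    CHANGE_REF="refs/changes/29/24129/6"   # refs/changes/<last 2 digits>/<change>/<patchset>
    git init jbp && cd jbp
    # --depth=1 downloads only the tip commit of the patchset ref.
    git fetch --tags --force --progress --depth=1 -- "$REPO_URL" "$CHANGE_REF"
    git checkout -f FETCH_HEAD   # detached HEAD at the fetched patchset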
00:00:08.120 [Pipeline] node
00:00:08.133 Running on WFP13 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:08.135 [Pipeline] {
00:00:08.145 [Pipeline] catchError
00:00:08.146 [Pipeline] {
00:00:08.160 [Pipeline] wrap
00:00:08.169 [Pipeline] {
00:00:08.178 [Pipeline] stage
00:00:08.180 [Pipeline] { (Prologue)
00:00:08.382 [Pipeline] sh
00:00:08.663 + logger -p user.info -t JENKINS-CI
00:00:08.682 [Pipeline] echo
00:00:08.684 Node: WFP13
00:00:08.691 [Pipeline] sh
00:00:08.990 [Pipeline] setCustomBuildProperty
00:00:09.005 [Pipeline] echo
00:00:09.006 Cleanup processes
00:00:09.012 [Pipeline] sh
00:00:09.296 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.296 470737 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.310 [Pipeline] sh
00:00:09.593 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:09.593 ++ grep -v 'sudo pgrep'
00:00:09.593 ++ awk '{print $1}'
00:00:09.593 + sudo kill -9
00:00:09.593 + true
00:00:09.606 [Pipeline] cleanWs
00:00:09.614 [WS-CLEANUP] Deleting project workspace...
00:00:09.614 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.620 [WS-CLEANUP] done
00:00:09.623 [Pipeline] setCustomBuildProperty
00:00:09.637 [Pipeline] sh
00:00:09.914 + sudo git config --global --replace-all safe.directory '*'
00:00:10.007 [Pipeline] httpRequest
00:00:10.038 [Pipeline] echo
00:00:10.040 Sorcerer 10.211.164.101 is alive
00:00:10.050 [Pipeline] httpRequest
00:00:10.055 HttpMethod: GET
00:00:10.056 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:10.056 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:10.059 Response Code: HTTP/1.1 200 OK
00:00:10.060 Success: Status code 200 is in the accepted range: 200,404
00:00:10.061 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:11.049 [Pipeline] sh
00:00:11.330 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz
00:00:11.346 [Pipeline] httpRequest
00:00:11.372 [Pipeline] echo
00:00:11.374 Sorcerer 10.211.164.101 is alive
00:00:11.384 [Pipeline] httpRequest
00:00:11.389 HttpMethod: GET
00:00:11.389 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.390 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:11.393 Response Code: HTTP/1.1 200 OK
00:00:11.394 Success: Status code 200 is in the accepted range: 200,404
00:00:11.394 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:29.764 [Pipeline] sh
00:00:30.048 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:32.593 [Pipeline] sh
00:00:32.874 + git -C spdk log --oneline -n5
00:00:32.874 f7b31b2b9 log: declare g_deprecation_epoch static
00:00:32.874 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:00:32.874 3731556bd lvol: declare g_lvol_if static
00:00:32.874 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:00:32.874 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:00:32.888 [Pipeline] sh
00:00:33.170 + ip --json address
00:00:33.184 [Pipeline] readJSON
00:00:33.207 [Pipeline] echo
00:00:33.209 NIC with Beetle address is already setup (192.168.10.10)
00:00:33.215 [Pipeline] withCredentials
00:00:33.229 Masking supported pattern matches of $beetle_key
00:00:33.231 [Pipeline] {
00:00:33.238 [Pipeline] sh
00:00:33.517 + ssh -i **** -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectionAttempts=5 root@192.168.10.11 'for gpio in {0..10}; do Beetle --SetGpio "$gpio" HIGH; done'
00:00:34.085 Warning: Permanently added '192.168.10.11' (ED25519) to the list of known hosts.
00:00:36.631 [Pipeline] }
00:00:36.658 [Pipeline] // withCredentials
00:00:36.664 [Pipeline] }
00:00:36.682 [Pipeline] // stage
00:00:36.693 [Pipeline] stage
00:00:36.695 [Pipeline] { (Prepare)
00:00:36.714 [Pipeline] writeFile
00:00:36.733 [Pipeline] sh
00:00:37.016 + logger -p user.info -t JENKINS-CI
00:00:37.030 [Pipeline] sh
00:00:37.314 + logger -p user.info -t JENKINS-CI
00:00:37.327 [Pipeline] sh
00:00:37.611 + cat autorun-spdk.conf
00:00:37.611 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.611 SPDK_TEST_FUZZER_SHORT=1
00:00:37.611 SPDK_TEST_FUZZER=1
00:00:37.611 SPDK_RUN_ASAN=1
00:00:37.611 SPDK_RUN_UBSAN=1
00:00:37.617 RUN_NIGHTLY=1
00:00:37.622 [Pipeline] readFile
00:00:37.649 [Pipeline] withEnv
00:00:37.651 [Pipeline] {
00:00:37.666 [Pipeline] sh
00:00:37.954 + set -ex
00:00:37.954 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:00:37.954 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:37.954 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.954 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:37.954 ++ SPDK_TEST_FUZZER=1
00:00:37.954 ++ SPDK_RUN_ASAN=1
00:00:37.954 ++ SPDK_RUN_UBSAN=1
00:00:37.954 ++ RUN_NIGHTLY=1
00:00:37.954 + case $SPDK_TEST_NVMF_NICS in
00:00:37.954 + DRIVERS=
00:00:37.954 + [[ -n '' ]]
00:00:37.954 + exit 0
00:00:37.964 [Pipeline] }
00:00:37.982 [Pipeline] // withEnv
00:00:37.989 [Pipeline] }
00:00:38.006 [Pipeline] // stage
00:00:38.017 [Pipeline] catchError
00:00:38.019 [Pipeline] {
00:00:38.035 [Pipeline] timeout
00:00:38.035 Timeout set to expire in 30 min
00:00:38.037 [Pipeline] {
00:00:38.055 [Pipeline] stage
00:00:38.058 [Pipeline] { (Tests)
00:00:38.074 [Pipeline] sh
00:00:38.360 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:38.360 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:38.360 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:00:38.360 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:00:38.360 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:38.360 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:38.360 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:00:38.360 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:38.360 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:00:38.360 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:00:38.360 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:00:38.360 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:38.360 + source /etc/os-release
00:00:38.360 ++ NAME='Fedora Linux'
00:00:38.360 ++ VERSION='38 (Cloud Edition)'
00:00:38.360 ++ ID=fedora
00:00:38.360 ++ VERSION_ID=38
00:00:38.360 ++ VERSION_CODENAME=
00:00:38.360 ++ PLATFORM_ID=platform:f38
00:00:38.360 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:38.360 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:38.360 ++ LOGO=fedora-logo-icon
00:00:38.360 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:38.360 ++ HOME_URL=https://fedoraproject.org/
00:00:38.360 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:38.360 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:38.360 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:38.360 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:38.360 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:38.360 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:38.360 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:38.360 ++ SUPPORT_END=2024-05-14
00:00:38.360 ++ VARIANT='Cloud Edition'
00:00:38.360 ++ VARIANT_ID=cloud
00:00:38.360 + uname -a
00:00:38.360 Linux spdk-wfp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:38.360 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:00:41.648 Hugepages
00:00:41.648 node hugesize free / total
00:00:41.648 node0 1048576kB 0 / 0
00:00:41.648 node0 2048kB 0 / 0
00:00:41.648 node1 1048576kB 0 / 0
00:00:41.648 node1 2048kB 0 / 0
00:00:41.648
00:00:41.648 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:41.648 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:41.648 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:41.648 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1
00:00:41.648 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:41.648 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1
00:00:41.907 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:00:41.907 + rm -f /tmp/spdk-ld-path
00:00:41.908 + source autorun-spdk.conf
00:00:41.908 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.908 ++ SPDK_TEST_FUZZER_SHORT=1
00:00:41.908 ++ SPDK_TEST_FUZZER=1
00:00:41.908 ++ SPDK_RUN_ASAN=1
00:00:41.908 ++ SPDK_RUN_UBSAN=1
00:00:41.908 ++ RUN_NIGHTLY=1
00:00:41.908 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:41.908 + [[ -n '' ]]
00:00:41.908 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
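The "Cleanup processes" step earlier in this stage shows a common CI idiom: grep for processes left over from a previous run and kill them, tolerating an empty match (hence the bare `+ true` after `kill -9` in the trace). A minimal sketch of that pipeline, with the workspace path taken from the log:

    # Kill stale SPDK processes left over from a previous build of this workspace.
    WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
    # pgrep -af prints "PID cmdline"; drop our own pgrep invocation, keep only PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no survivors $pids is empty and kill exits nonzero; '|| true' keeps a
    # 'set -e' script alive, which is what the trailing '+ true' above corresponds to.
    sudo kill -9 $pids || true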
00:00:41.908 + for M in /var/spdk/build-*-manifest.txt
00:00:41.908 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:41.908 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:41.908 + for M in /var/spdk/build-*-manifest.txt
00:00:41.908 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:41.908 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:00:41.908 ++ uname
00:00:41.908 + [[ Linux == \L\i\n\u\x ]]
00:00:41.908 + sudo dmesg -T
00:00:41.908 + sudo dmesg --clear
00:00:41.908 + dmesg_pid=471914
00:00:41.908 + [[ Fedora Linux == FreeBSD ]]
00:00:41.908 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:41.908 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:41.908 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:41.908 + [[ -x /usr/src/fio-static/fio ]]
00:00:41.908 + sudo dmesg -Tw
00:00:41.908 + export FIO_BIN=/usr/src/fio-static/fio
00:00:41.908 + FIO_BIN=/usr/src/fio-static/fio
00:00:41.908 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:41.908 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:41.908 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:41.908 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:41.908 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:41.908 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:41.908 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:41.908 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:41.908 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:00:41.908 Test configuration:
00:00:41.908 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.908 SPDK_TEST_FUZZER_SHORT=1
00:00:41.908 SPDK_TEST_FUZZER=1
00:00:41.908 SPDK_RUN_ASAN=1
00:00:41.908 SPDK_RUN_UBSAN=1
00:00:41.908 RUN_NIGHTLY=1
08:13:54 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
08:13:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
08:13:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:13:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
08:13:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:13:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:13:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:13:54 -- paths/export.sh@5 -- $ export PATH
08:13:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:13:54 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
08:13:54 -- common/autobuild_common.sh@447 -- $ date +%s
08:13:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721715234.XXXXXX
08:13:54 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721715234.nRzcm1
08:13:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
08:13:54 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
08:13:54 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
08:13:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
08:13:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:42.167 08:13:54 -- common/autobuild_common.sh@463 -- $ get_config_params
08:13:54 -- common/autotest_common.sh@396 -- $ xtrace_disable
08:13:54 -- common/autotest_common.sh@10 -- $ set +x
08:13:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user'
08:13:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
08:13:54 -- pm/common@17 -- $ local monitor
08:13:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:13:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:13:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:13:54 -- pm/common@21 -- $ date +%s
08:13:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:13:54 -- pm/common@21 -- $ date +%s
08:13:54 -- pm/common@25 -- $ sleep 1
08:13:54 -- pm/common@21 -- $ date +%s
08:13:54 -- pm/common@21 -- $ date +%s
08:13:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715234
00:00:42.167 08:13:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715234
08:13:54 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715234
08:13:54 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721715234
00:00:42.167 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715234_collect-vmstat.pm.log
00:00:42.167 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715234_collect-cpu-load.pm.log
00:00:42.167 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715234_collect-cpu-temp.pm.log
00:00:42.167 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721715234_collect-bmc-pm.bmc.pm.log
00:00:43.102 08:13:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
08:13:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
08:13:55 -- spdk/autobuild.sh@12 -- $ umask 022
08:13:55 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
08:13:55 -- spdk/autobuild.sh@16 -- $ date -u
00:00:43.102 Tue Jul 23 06:13:55 AM UTC 2024
08:13:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:43.102 v24.09-pre-297-gf7b31b2b9
08:13:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
08:13:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
08:13:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
08:13:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable
08:13:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:43.102 ************************************
00:00:43.102 START TEST asan
00:00:43.102 ************************************
08:13:55 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:00:43.102 using asan
00:00:43.102
00:00:43.102 real 0m0.000s
00:00:43.102 user 0m0.000s
00:00:43.102 sys 0m0.000s
08:13:55 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
08:13:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:00:43.102 ************************************
00:00:43.102 END TEST asan
00:00:43.102 ************************************
08:13:55 -- common/autotest_common.sh@1142 -- $ return 0
08:13:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
08:13:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
08:13:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
08:13:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable
08:13:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:43.102 ************************************
00:00:43.102 START TEST ubsan
00:00:43.102 ************************************
08:13:55 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:43.102 using ubsan
00:00:43.102
00:00:43.102 real 0m0.000s
00:00:43.102 user 0m0.000s
00:00:43.102 sys 0m0.000s
08:13:55 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
08:13:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:43.102 ************************************
00:00:43.102 END TEST ubsan
00:00:43.102 ************************************
08:13:55 -- common/autotest_common.sh@1142 -- $ return 0
00:00:43.361 08:13:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
08:13:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
08:13:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
08:13:55 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
08:13:55 -- spdk/autobuild.sh@52 -- $ llvm_precompile
08:13:55 -- common/autobuild_common.sh@435 -- $ run_test autobuild_llvm_precompile _llvm_precompile
08:13:55 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
08:13:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable
08:13:55 -- common/autotest_common.sh@10 -- $ set +x
00:00:43.361 ************************************
00:00:43.361 START TEST autobuild_llvm_precompile
00:00:43.361 ************************************
08:13:55 autobuild_llvm_precompile -- common/autotest_common.sh@1123 -- $ _llvm_precompile
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38)
00:00:43.361 Target: x86_64-redhat-linux-gnu
00:00:43.361 Thread model: posix
00:00:43.361 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]]
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a'
08:13:55 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:00:43.620 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:00:43.620 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:00:43.878 Using 'verbs' RDMA provider
00:00:57.467 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:09.679 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:09.679 Creating mk/config.mk...done.
00:01:09.679 Creating mk/cc.flags.mk...done.
00:01:09.679 Type 'make' to build.
00:01:09.679
00:01:09.679 real 0m26.534s
00:01:09.679 user 0m12.637s
00:01:09.679 sys 0m12.982s
08:14:22 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ xtrace_disable
08:14:22 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:01:09.679 ************************************
00:01:09.679 END TEST autobuild_llvm_precompile
00:01:09.679 ************************************
00:01:09.939 08:14:22 -- common/autotest_common.sh@1142 -- $ return 0
08:14:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
08:14:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
08:14:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
08:14:22 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
08:14:22 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a
00:01:10.198 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:01:10.198 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:10.456 Using 'verbs' RDMA provider
00:01:23.602 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:33.581 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:33.581 Creating mk/config.mk...done.
00:01:33.581 Creating mk/cc.flags.mk...done.
00:01:33.581 Type 'make' to build.
08:14:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j88
08:14:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
08:14:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
08:14:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:33.581 ************************************
00:01:33.581 START TEST make
00:01:33.581 ************************************
08:14:45 make -- common/autotest_common.sh@1123 -- $ make -j88
00:01:33.581 make[1]: Nothing to be done for 'all'.
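The llvm_precompile trace above resolves the libFuzzer archive with a bash extended glob: @("$clang_num"|"$clang_version") matches a directory named after either the major or the full clang version, and ?(-x86_64) makes the arch suffix optional. A standalone sketch of the same lookup, with version numbers taken from the log (plain bash needs extglob enabled explicitly):

    # Locate clang's libclang_rt.fuzzer_no_main archive, as in the trace above.
    shopt -s extglob          # enables the @(...) and ?(...) patterns
    clang_num=16              # major version dir, e.g. /usr/lib64/clang/16/...
    clang_version=16.0.6      # full version dir, e.g. /usr/lib64/clang/16.0.6/...
    fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
    fuzzer_lib=${fuzzer_libs[0]}
    echo "$fuzzer_lib"   # on this host: /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a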
00:01:34.523 The Meson build system
00:01:34.523 Version: 1.3.1
00:01:34.523 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:01:34.523 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:34.523 Build type: native build
00:01:34.523 Project name: libvfio-user
00:01:34.523 Project version: 0.0.1
00:01:34.523 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:34.523 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:34.523 Host machine cpu family: x86_64
00:01:34.523 Host machine cpu: x86_64
00:01:34.523 Run-time dependency threads found: YES
00:01:34.523 Library dl found: YES
00:01:34.523 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:34.523 Run-time dependency json-c found: YES 0.17
00:01:34.523 Run-time dependency cmocka found: YES 1.1.7
00:01:34.523 Program pytest-3 found: NO
00:01:34.523 Program flake8 found: NO
00:01:34.523 Program misspell-fixer found: NO
00:01:34.523 Program restructuredtext-lint found: NO
00:01:34.524 Program valgrind found: YES (/usr/bin/valgrind)
00:01:34.524 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.524 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.524 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.524 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.524 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:34.524 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:34.524 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:34.524 Build targets in project: 8
00:01:34.524 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:34.524 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:34.524
00:01:34.524 libvfio-user 0.0.1
00:01:34.524
00:01:34.524 User defined options
00:01:34.524 buildtype : debug
00:01:34.524 default_library: static
00:01:34.524 libdir : /usr/local/lib
00:01:34.524
00:01:34.524 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:35.091 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:35.091 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:35.091 [2/36] Compiling C object samples/null.p/null.c.o
00:01:35.091 [3/36] Compiling C object samples/lspci.p/lspci.c.o
00:01:35.091 [4/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:01:35.091 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:01:35.091 [6/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:35.091 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:35.091 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:01:35.091 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:35.091 [10/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:01:35.091 [11/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:35.091 [12/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:35.091 [13/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:35.091 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:35.091 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:35.091 [16/36] Compiling C object test/unit_tests.p/mocks.c.o
00:01:35.091 [17/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:35.091 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:01:35.091 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:01:35.091 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:01:35.091 [21/36] Compiling C object samples/server.p/server.c.o
00:01:35.091 [22/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:35.091 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:35.091 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:35.091 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:35.091 [26/36] Compiling C object samples/client.p/client.c.o
00:01:35.091 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:01:35.091 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:35.091 [29/36] Linking static target lib/libvfio-user.a
00:01:35.091 [30/36] Linking target samples/client
00:01:35.091 [31/36] Linking target test/unit_tests
00:01:35.091 [32/36] Linking target samples/lspci
00:01:35.091 [33/36] Linking target samples/gpio-pci-idio-16
00:01:35.091 [34/36] Linking target samples/null
00:01:35.091 [35/36] Linking target samples/shadow_ioeventfd_server
00:01:35.091 [36/36] Linking target samples/server
00:01:35.350 INFO: autodetecting backend as ninja
00:01:35.350 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:35.350 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:35.608 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:35.608 ninja: no work to do.
00:01:40.881 The Meson build system
00:01:40.881 Version: 1.3.1
00:01:40.881 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:01:40.881 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:01:40.881 Build type: native build
00:01:40.881 Program cat found: YES (/usr/bin/cat)
00:01:40.881 Project name: DPDK
00:01:40.881 Project version: 24.03.0
00:01:40.881 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)")
00:01:40.881 C linker for the host machine: clang-16 ld.bfd 2.39-16
00:01:40.881 Host machine cpu family: x86_64
00:01:40.881 Host machine cpu: x86_64
00:01:40.881 Message: ## Building in Developer Mode ##
00:01:40.881 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:40.881 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:40.881 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:40.881 Program python3 found: YES (/usr/bin/python3)
00:01:40.881 Program cat found: YES (/usr/bin/cat)
00:01:40.881 Compiler for C supports arguments -march=native: YES
00:01:40.881 Checking for size of "void *" : 8
00:01:40.881 Checking for size of "void *" : 8 (cached)
00:01:40.881 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:40.881 Library m found: YES
00:01:40.881 Library numa found: YES
00:01:40.881 Has header "numaif.h" : YES
00:01:40.881 Library fdt found: NO
00:01:40.881 Library execinfo found: NO
00:01:40.881 Has header "execinfo.h" : YES
00:01:40.881 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:40.881 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:40.881 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:40.881 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:40.881 Run-time dependency openssl found: YES 3.0.9
00:01:40.881 Run-time dependency libpcap found: YES 1.10.4
00:01:40.881 Has header "pcap.h" with dependency libpcap: YES
00:01:40.881 Compiler for C supports arguments -Wcast-qual: YES
00:01:40.881 Compiler for C supports arguments -Wdeprecated: YES
00:01:40.881 Compiler for C supports arguments -Wformat: YES
00:01:40.881 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:40.881 Compiler for C supports arguments -Wformat-security: YES
00:01:40.881 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.881 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:40.881 Compiler for C supports arguments -Wnested-externs: YES
00:01:40.881 Compiler for C supports arguments -Wold-style-definition: YES
00:01:40.881 Compiler for C supports arguments -Wpointer-arith: YES
00:01:40.881 Compiler for C supports arguments -Wsign-compare: YES
00:01:40.881 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:40.881 Compiler for C supports arguments -Wundef: YES
00:01:40.881 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.881 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:40.881 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:01:40.881 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.881 Program objdump found: YES (/usr/bin/objdump)
00:01:40.881 Compiler for C supports arguments -mavx512f: YES
00:01:40.881 Checking if "AVX512 checking" compiles: YES
00:01:40.881 Fetching value of define "__SSE4_2__" : 1
00:01:40.881 Fetching value of define "__AES__" : 1
00:01:40.881 Fetching value of define "__AVX__" : 1
00:01:40.881 Fetching value of define "__AVX2__" : 1
00:01:40.882 Fetching value of define "__AVX512BW__" : 1
00:01:40.882 Fetching value of define "__AVX512CD__" : 1
00:01:40.882 Fetching value of define "__AVX512DQ__" : 1
00:01:40.882 Fetching value of define "__AVX512F__" : 1
00:01:40.882 Fetching value of define "__AVX512VL__" : 1
00:01:40.882 Fetching value of define "__PCLMUL__" : 1
00:01:40.882 Fetching value of define "__RDRND__" : 1
00:01:40.882 Fetching value of define "__RDSEED__" : 1
00:01:40.882 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:40.882 Fetching value of define "__znver1__" : (undefined)
00:01:40.882 Fetching value of define "__znver2__" : (undefined)
00:01:40.882 Fetching value of define "__znver3__" : (undefined)
00:01:40.882 Fetching value of define "__znver4__" : (undefined)
00:01:40.882 Compiler for C supports arguments -Wno-format-truncation: NO
00:01:40.882 Message: lib/log: Defining dependency "log"
00:01:40.882 Message: lib/kvargs: Defining dependency "kvargs"
00:01:40.882 Message: lib/telemetry: Defining dependency "telemetry"
00:01:40.882 Library rt found: YES
00:01:40.882 Checking for function "getentropy" : NO
00:01:40.882 Message: lib/eal: Defining dependency "eal"
00:01:40.882 Message: lib/ring: Defining dependency "ring"
00:01:40.882 Message: lib/rcu: Defining dependency "rcu"
00:01:40.882 Message: lib/mempool: Defining dependency "mempool"
00:01:40.882 Message: lib/mbuf: Defining dependency "mbuf"
00:01:40.882 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:40.882 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:40.882 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:40.882 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:40.882 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:40.882 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:40.882 Compiler for C supports arguments -mpclmul: YES
00:01:40.882 Compiler for C supports arguments -maes: YES
00:01:40.882 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:40.882 Compiler for C supports arguments -mavx512bw: YES
00:01:40.882 Compiler for C supports arguments -mavx512dq: YES
00:01:40.882 Compiler for C supports arguments -mavx512vl: YES
00:01:40.882 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:40.882 Compiler for C supports arguments -mavx2: YES
00:01:40.882 Compiler for C supports arguments -mavx: YES
00:01:40.882 Message: lib/net: Defining dependency "net"
00:01:40.882 Message: lib/meter: Defining dependency "meter"
00:01:40.882 Message: lib/ethdev: Defining dependency "ethdev"
00:01:40.882 Message: lib/pci: Defining dependency "pci"
00:01:40.882 Message: lib/cmdline: Defining dependency "cmdline"
00:01:40.882 Message: lib/hash: Defining dependency "hash"
00:01:40.882 Message: lib/timer: Defining dependency "timer"
00:01:40.882 Message: lib/compressdev: Defining dependency "compressdev"
00:01:40.882 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:40.882 Message: lib/dmadev: Defining dependency "dmadev"
00:01:40.882 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:40.882 Message: lib/power: Defining dependency "power"
00:01:40.882 Message: lib/reorder: Defining dependency "reorder"
00:01:40.882 Message: lib/security: Defining dependency "security"
00:01:40.882 Has header "linux/userfaultfd.h" : YES
00:01:40.882 Has header "linux/vduse.h" : YES
00:01:40.882 Message: lib/vhost: Defining dependency "vhost"
00:01:40.882 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:01:40.882 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:40.882 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:40.882 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:40.882 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:40.882 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:40.882 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:40.882 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:40.882 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:40.882 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:40.882 Program doxygen found: YES (/usr/bin/doxygen)
00:01:40.882 Configuring doxy-api-html.conf using configuration
00:01:40.882 Configuring doxy-api-man.conf using configuration
00:01:40.882 Program mandb found: YES (/usr/bin/mandb)
00:01:40.882 Program sphinx-build found: NO
00:01:40.882 Configuring rte_build_config.h using configuration
00:01:40.882 Message:
00:01:40.882 =================
00:01:40.882 Applications Enabled
00:01:40.882 =================
00:01:40.882
00:01:40.882 apps:
00:01:40.882
00:01:40.882
00:01:40.882 Message:
00:01:40.882 =================
00:01:40.882 Libraries Enabled
00:01:40.882 =================
00:01:40.882
00:01:40.882 libs:
00:01:40.882 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:40.882 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:40.882 cryptodev, dmadev, power, reorder, security, vhost,
00:01:40.882
00:01:40.882 Message:
00:01:40.882 ===============
00:01:40.882 Drivers Enabled
00:01:40.882 ===============
00:01:40.882
00:01:40.882 common:
00:01:40.882
00:01:40.882 bus:
00:01:40.882 pci, vdev,
00:01:40.882 mempool:
00:01:40.882 ring,
00:01:40.882 dma:
00:01:40.882
00:01:40.882 net:
00:01:40.882
00:01:40.882 crypto:
00:01:40.882
00:01:40.882 compress:
00:01:40.882
00:01:40.882 vdpa:
00:01:40.882
00:01:40.882
00:01:40.882 Message:
00:01:40.882 =================
00:01:40.882 Content Skipped
00:01:40.882 =================
00:01:40.882
00:01:40.882 apps:
00:01:40.882 dumpcap: explicitly disabled via build config
00:01:40.882 graph: explicitly disabled via build config
00:01:40.882 pdump: explicitly disabled via build config
00:01:40.882 proc-info: explicitly disabled via build config
00:01:40.882 test-acl: explicitly disabled via build config
00:01:40.882 test-bbdev: explicitly disabled via build config
00:01:40.882 test-cmdline: explicitly disabled via build config
00:01:40.882 test-compress-perf: explicitly disabled via build config
00:01:40.882 test-crypto-perf: explicitly disabled via build config
00:01:40.882 test-dma-perf: explicitly disabled via build config
00:01:40.882 test-eventdev: explicitly disabled via build config
00:01:40.882 test-fib: explicitly disabled via build config
00:01:40.882 test-flow-perf: explicitly disabled via build config
00:01:40.882 test-gpudev: explicitly disabled via build config
00:01:40.882 test-mldev: explicitly disabled via build config
00:01:40.882 test-pipeline: explicitly disabled via build config
00:01:40.882 test-pmd: explicitly disabled via build config
00:01:40.882 test-regex: explicitly disabled via build config
00:01:40.882 test-sad: explicitly disabled via build config
00:01:40.882 test-security-perf: explicitly disabled via build config
00:01:40.882
00:01:40.882 libs:
00:01:40.882 argparse: explicitly disabled via build config
00:01:40.882 metrics: explicitly disabled via build config
00:01:40.882 acl: explicitly disabled via build config
00:01:40.882 bbdev: explicitly disabled via build config
00:01:40.882 bitratestats: explicitly disabled via build config
00:01:40.882 bpf: explicitly disabled via build config
00:01:40.882 cfgfile: explicitly disabled via build config
00:01:40.882 distributor: explicitly disabled via build config
00:01:40.882 efd: explicitly disabled via build config
00:01:40.882 eventdev: explicitly disabled via build config
00:01:40.882 dispatcher: explicitly disabled via build config
00:01:40.882 gpudev: explicitly disabled via build config
00:01:40.882 gro: explicitly disabled via build config
00:01:40.882 gso: explicitly disabled via build config
00:01:40.882 ip_frag: explicitly disabled via build config
00:01:40.882 jobstats: explicitly disabled via build config
00:01:40.882 latencystats: explicitly disabled via build config
00:01:40.882 lpm: explicitly disabled via build config
00:01:40.882 member: explicitly disabled via build config
00:01:40.882 pcapng: explicitly disabled via build config
00:01:40.882 rawdev: explicitly disabled via build config
00:01:40.882 regexdev: explicitly disabled via build config
00:01:40.882 mldev: explicitly disabled via build config
00:01:40.882 rib: explicitly disabled via build config
00:01:40.882 sched: explicitly disabled via build config
00:01:40.882 stack: explicitly disabled via build config
00:01:40.882 ipsec: explicitly disabled via build config
00:01:40.882 pdcp: explicitly disabled via build config
00:01:40.882 fib: explicitly disabled via build config
00:01:40.882 port: explicitly disabled via build config
00:01:40.882 pdump: explicitly disabled via build config
00:01:40.882 table: explicitly disabled via build config
00:01:40.882 pipeline: explicitly disabled via build config
00:01:40.882 graph: explicitly disabled via build config
00:01:40.882 node: explicitly disabled via build config
00:01:40.882
00:01:40.882 drivers:
00:01:40.882 common/cpt: not in enabled drivers build config
00:01:40.882 common/dpaax: not in enabled drivers build config
00:01:40.882 common/iavf: not in enabled drivers build config
00:01:40.882 common/idpf: not in enabled drivers build config
00:01:40.882 common/ionic: not in enabled drivers build config
00:01:40.882 common/mvep: not in enabled drivers build config
00:01:40.882 common/octeontx: not in enabled drivers build config
00:01:40.882 bus/auxiliary: not in enabled drivers build config
00:01:40.882 bus/cdx: not in enabled drivers build config
00:01:40.882 bus/dpaa: not in enabled drivers build config
00:01:40.882 bus/fslmc: not in enabled drivers build config
00:01:40.882 bus/ifpga: not in enabled drivers build config
00:01:40.882 bus/platform: not in enabled drivers build config
00:01:40.882 bus/uacce: not in enabled drivers build config
00:01:40.882 bus/vmbus: not in enabled drivers build config
00:01:40.882 common/cnxk: not in enabled drivers build config
00:01:40.882 common/mlx5: not in enabled drivers build config
00:01:40.882 common/nfp: not in enabled drivers build config
00:01:40.882 common/nitrox: not in enabled drivers build config
00:01:40.882 common/qat: not in enabled drivers build config
00:01:40.882 common/sfc_efx: not in enabled drivers build config
00:01:40.882 mempool/bucket: not in enabled drivers build config
00:01:40.882 mempool/cnxk: not in enabled drivers build config
00:01:40.883 mempool/dpaa: not in enabled drivers build config
00:01:40.883 mempool/dpaa2: not in enabled drivers build config
00:01:40.883 mempool/octeontx: not in enabled drivers build config
00:01:40.883 mempool/stack: not in enabled drivers build config
00:01:40.883 dma/cnxk: not in enabled drivers build config
00:01:40.883 dma/dpaa: not in enabled drivers build config
00:01:40.883 dma/dpaa2: not in enabled drivers build config
00:01:40.883 dma/hisilicon: not in enabled drivers build config
00:01:40.883 dma/idxd: not in enabled drivers build config
00:01:40.883 dma/ioat: not in enabled drivers build config
00:01:40.883 dma/skeleton: not in enabled drivers build config
00:01:40.883 net/af_packet: not in enabled drivers build config
00:01:40.883 net/af_xdp: not in enabled drivers build config
00:01:40.883 net/ark: not in enabled drivers build config
00:01:40.883 net/atlantic: not in enabled drivers build config
00:01:40.883 net/avp: not in enabled drivers build config
00:01:40.883 net/axgbe: not in enabled drivers build config
00:01:40.883 net/bnx2x: not in enabled drivers build config
00:01:40.883 net/bnxt: not in enabled drivers build config
00:01:40.883 net/bonding: not in enabled drivers build config
00:01:40.883 net/cnxk: not in enabled drivers build config
00:01:40.883 net/cpfl: not in enabled drivers build config
00:01:40.883 net/cxgbe: not in enabled drivers build config
00:01:40.883 net/dpaa: not in enabled drivers build config
00:01:40.883 net/dpaa2: not in enabled drivers build config
00:01:40.883 net/e1000: not in enabled drivers build config
00:01:40.883 net/ena: not in enabled drivers build config
00:01:40.883 net/enetc: not in enabled drivers build config
00:01:40.883 net/enetfec: not in enabled drivers build config
00:01:40.883 net/enic: not in enabled drivers build config
00:01:40.883 net/failsafe: not in enabled drivers build config
00:01:40.883 net/fm10k: not in enabled drivers build config
00:01:40.883 net/gve: not in enabled drivers build config
00:01:40.883 net/hinic: not in enabled drivers build config
00:01:40.883 net/hns3: not in enabled drivers build config
00:01:40.883 net/i40e: not in enabled drivers build config
00:01:40.883 net/iavf: not in enabled drivers build config
00:01:40.883 net/ice: not in enabled drivers build config
00:01:40.883 net/idpf: not in enabled drivers build config
00:01:40.883 net/igc: not in enabled drivers build config
00:01:40.883 net/ionic: not in enabled drivers build config
00:01:40.883 net/ipn3ke: not in enabled drivers build config
00:01:40.883 net/ixgbe: not in enabled drivers build config
00:01:40.883 net/mana: not in enabled drivers build config
00:01:40.883 net/memif: not in enabled drivers build config
00:01:40.883 net/mlx4: not in enabled drivers build config
00:01:40.883 net/mlx5: not in enabled drivers build config
00:01:40.883 net/mvneta: not in enabled drivers build config
00:01:40.883 net/mvpp2: not in enabled drivers build config
00:01:40.883 net/netvsc: not in enabled drivers build config
00:01:40.883 net/nfb: not in enabled drivers build config
00:01:40.883 net/nfp: not in enabled drivers build config
00:01:40.883 net/ngbe: not in enabled drivers build config
00:01:40.883 net/null: not in enabled drivers build config
00:01:40.883 net/octeontx: not in enabled drivers build config
00:01:40.883 net/octeon_ep: not in enabled drivers build config
00:01:40.883 net/pcap: not in enabled drivers build config
00:01:40.883 net/pfe: not in enabled drivers build config
00:01:40.883 net/qede: not in enabled drivers build config
00:01:40.883 net/ring: not in enabled drivers build config
00:01:40.883 net/sfc: not in enabled drivers build config
00:01:40.883 net/softnic: not in enabled drivers build config
00:01:40.883 net/tap: not in enabled drivers build config
00:01:40.883 net/thunderx: not in enabled drivers build config
00:01:40.883 net/txgbe: not in enabled drivers build config
00:01:40.883 net/vdev_netvsc: not in enabled drivers build config
00:01:40.883 net/vhost: not in enabled drivers build config
00:01:40.883 net/virtio: not in enabled drivers build config
00:01:40.883 net/vmxnet3: not in enabled drivers build config
00:01:40.883 raw/*: missing internal dependency, "rawdev"
00:01:40.883 crypto/armv8: not in enabled drivers build config
00:01:40.883 crypto/bcmfs: not in enabled drivers build config
00:01:40.883 crypto/caam_jr: not in enabled drivers build config
00:01:40.883 crypto/ccp: not in enabled drivers build config
00:01:40.883 crypto/cnxk: not in enabled drivers build config
00:01:40.883 crypto/dpaa_sec: not in enabled drivers build config
00:01:40.883 crypto/dpaa2_sec: not in enabled drivers build config
00:01:40.883 crypto/ipsec_mb: not in enabled drivers build config
00:01:40.883 crypto/mlx5: not in enabled drivers build config
00:01:40.883 crypto/mvsam: not in enabled drivers build config
00:01:40.883 crypto/nitrox: not in enabled drivers build config
00:01:40.883 crypto/null: not in enabled drivers build config
00:01:40.883 crypto/octeontx: not in enabled drivers build config
00:01:40.883 crypto/openssl: not in enabled drivers build config
00:01:40.883 crypto/scheduler: not in enabled drivers build config
00:01:40.883 crypto/uadk: not in enabled drivers build config
00:01:40.883 crypto/virtio: not in enabled drivers build config
00:01:40.883 compress/isal: not in enabled drivers build config
00:01:40.883 compress/mlx5: not in enabled drivers build config
00:01:40.883 compress/nitrox: not in enabled drivers build config
00:01:40.883 compress/octeontx: not in enabled drivers build config
00:01:40.883 compress/zlib: not in enabled drivers build config
00:01:40.883 regex/*: missing internal dependency, "regexdev"
00:01:40.883 ml/*: missing internal dependency, "mldev"
00:01:40.883 vdpa/ifc: not in enabled drivers build config
00:01:40.883 vdpa/mlx5: not in enabled drivers build config
00:01:40.883 vdpa/nfp: not in enabled drivers build config
00:01:40.883 vdpa/sfc: not in enabled drivers build config
00:01:40.883 event/*: missing internal dependency, "eventdev"
00:01:40.883 baseband/*: missing internal dependency, "bbdev"
00:01:40.883 gpu/*: missing internal dependency, "gpudev"
00:01:40.883
00:01:40.883
00:01:40.883 Build targets in project: 85
00:01:40.883
00:01:40.883 DPDK 24.03.0
00:01:40.883
00:01:40.883 User defined options
00:01:40.883 buildtype : debug
00:01:40.883 default_library : static
00:01:40.883 libdir : lib
00:01:40.883 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:01:40.883 b_lundef : false
00:01:40.883 b_sanitize : address
00:01:40.883 c_args : -fPIC -Werror
00:01:40.883 c_link_args :
00:01:40.883 cpu_instruction_set: native
00:01:40.883 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf
00:01:40.883 disable_libs : acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev
00:01:40.883 enable_docs : false
00:01:40.883 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:40.883 enable_kmods : false
00:01:40.883 max_lcores : 128
00:01:40.883 tests : false
00:01:40.883
00:01:40.883 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:41.461 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp'
00:01:41.461 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:41.461 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:41.461 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:41.461 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:41.461 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:41.461 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:41.461 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:41.461 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:41.461 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:41.461 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:41.461 [11/268] Linking static target lib/librte_kvargs.a
00:01:41.461 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:41.461 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:41.461 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:41.461 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:41.461 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:41.461 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:41.461 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:41.461 [19/268] Linking static target lib/librte_log.a
00:01:41.729 [20/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.007 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:42.007 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:42.007 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:42.007 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:42.007 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:42.007 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:42.007 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:42.007 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:42.007 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:42.007 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:42.007 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:42.007 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:42.007 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:42.007 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:42.007 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:42.007 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:42.007 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:42.007 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:42.007 [39/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:42.007 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:42.007 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:42.007 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:42.007 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:42.007 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:42.007 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:42.007 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:42.007 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:42.007 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:42.007 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:42.007 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:42.007 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:42.007 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:42.007 [53/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:42.007 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:42.007 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:42.007 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:42.007 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:42.007 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:42.007 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:42.007 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:42.007 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:42.007 [62/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:42.007 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:42.007 [64/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:42.007 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:42.007 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:42.007 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:42.007 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:42.007 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:42.007 [70/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:42.007 [71/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:42.007 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:42.007 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:42.007 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:42.007 [75/268] Linking static target lib/librte_pci.a
00:01:42.007 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:42.007 [77/268] Linking static target lib/librte_meter.a
00:01:42.007 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:42.007 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:42.007 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:42.007 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:42.007 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:42.007 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:42.007 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:42.007 [85/268] Linking static target lib/librte_telemetry.a
00:01:42.007 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:42.007 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:42.007 [88/268] Linking static target lib/librte_ring.a
00:01:42.007 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:42.007 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:42.007 [91/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:42.007 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:42.007 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:42.007 [94/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:42.007 [95/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:42.303 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:42.303 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:42.303 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:42.303 [99/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.303 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:42.303 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:42.303 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:42.303 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:42.303 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:42.303 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:42.303 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:42.303 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:42.303 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:42.303 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:42.303 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:42.303 [111/268] Linking target lib/librte_log.so.24.1
00:01:42.303 [112/268] Compiling C object
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:42.303 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:42.303 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:42.303 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:42.303 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:42.303 [117/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:42.303 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:42.303 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:42.303 [120/268] Linking static target lib/librte_net.a 00:01:42.303 [121/268] Linking static target lib/librte_mempool.a 00:01:42.303 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:42.303 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:42.303 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:42.303 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:42.303 [126/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.303 [127/268] Linking static target lib/librte_eal.a 00:01:42.303 [128/268] Linking static target lib/librte_rcu.a 00:01:42.303 [129/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.303 [130/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.303 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:42.303 [132/268] Linking target lib/librte_kvargs.so.24.1 00:01:42.303 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.303 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:42.303 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:42.586 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:42.586 [137/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.586 [138/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.586 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.586 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:42.586 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.586 [142/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.586 [143/268] Linking static target lib/librte_mbuf.a 00:01:42.586 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.586 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:42.586 [146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:42.586 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:42.586 [148/268] Linking target lib/librte_telemetry.so.24.1 00:01:42.586 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:42.586 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:42.586 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:42.586 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:42.586 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.586 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:42.586 [155/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:42.586 [156/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.586 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.586 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:42.586 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.586 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:42.586 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:42.586 [162/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.586 [163/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:42.586 [164/268] Linking static target lib/librte_reorder.a 00:01:42.586 [165/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.586 [166/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:42.586 [167/268] Linking static target lib/librte_timer.a 00:01:42.586 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:42.586 [169/268] Linking static target lib/librte_cmdline.a 00:01:42.586 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.586 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:42.586 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:42.586 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:42.850 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:42.850 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:42.850 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:42.850 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:42.850 [178/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:42.850 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:42.850 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:42.850 [181/268] Linking static target lib/librte_power.a 00:01:42.850 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:42.850 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:42.850 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.850 [185/268] Linking static target lib/librte_dmadev.a 00:01:42.850 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.850 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:42.850 [188/268] Linking static target lib/librte_compressdev.a 00:01:42.850 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.850 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.850 [191/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.850 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.850 
[193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.850 [194/268] Linking static target lib/librte_security.a 00:01:42.850 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.850 [196/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.850 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.850 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.850 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.109 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.109 [201/268] Linking static target drivers/librte_bus_vdev.a 00:01:43.109 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:43.109 [203/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:43.109 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:43.109 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.109 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.109 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.109 [208/268] Linking static target lib/librte_hash.a 00:01:43.109 [209/268] Linking static target lib/librte_cryptodev.a 00:01:43.109 [210/268] Linking static target drivers/librte_bus_pci.a 00:01:43.109 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.109 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.109 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.109 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:43.109 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.109 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.109 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.368 [218/268] Linking static target lib/librte_ethdev.a 00:01:43.368 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.368 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.368 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.368 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.627 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.627 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.885 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.885 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.885 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.820 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.079 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:45.079 [230/268] Linking static target lib/librte_vhost.a 00:01:46.986 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.255 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.514 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.514 [234/268] Linking target lib/librte_eal.so.24.1 00:01:52.772 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.772 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.772 [237/268] Linking target lib/librte_meter.so.24.1 00:01:52.772 [238/268] Linking target lib/librte_ring.so.24.1 00:01:52.772 [239/268] Linking target lib/librte_timer.so.24.1 00:01:52.772 [240/268] Linking target lib/librte_pci.so.24.1 00:01:52.772 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.772 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.772 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.772 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.772 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.772 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.030 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:53.030 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:53.030 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.030 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.030 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.030 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.030 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:53.289 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.289 [255/268] Linking target lib/librte_net.so.24.1 00:01:53.289 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:53.289 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:53.289 [258/268] Linking target lib/librte_reorder.so.24.1 00:01:53.548 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.548 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.548 [261/268] Linking target lib/librte_hash.so.24.1 00:01:53.548 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:53.548 [263/268] Linking target lib/librte_security.so.24.1 00:01:53.548 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.548 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.806 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.806 [267/268] Linking target lib/librte_power.so.24.1 00:01:53.806 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:53.806 INFO: autodetecting backend as ninja 00:01:53.806 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:01:54.742 CC lib/log/log.o 00:01:54.742 CC lib/log/log_deprecated.o 00:01:54.742 CC lib/log/log_flags.o 00:01:54.742 CC lib/ut/ut.o 00:01:54.742 CC lib/ut_mock/mock.o 
00:01:54.742 LIB libspdk_log.a 00:01:54.742 LIB libspdk_ut_mock.a 00:01:54.742 LIB libspdk_ut.a 00:01:55.000 CXX lib/trace_parser/trace.o 00:01:55.000 CC lib/util/base64.o 00:01:55.000 CC lib/util/bit_array.o 00:01:55.000 CC lib/ioat/ioat.o 00:01:55.000 CC lib/util/cpuset.o 00:01:55.000 CC lib/util/crc16.o 00:01:55.000 CC lib/util/crc32.o 00:01:55.000 CC lib/dma/dma.o 00:01:55.000 CC lib/util/crc32c.o 00:01:55.000 CC lib/util/crc32_ieee.o 00:01:55.000 CC lib/util/crc64.o 00:01:55.000 CC lib/util/fd.o 00:01:55.000 CC lib/util/dif.o 00:01:55.000 CC lib/util/fd_group.o 00:01:55.000 CC lib/util/file.o 00:01:55.000 CC lib/util/hexlify.o 00:01:55.000 CC lib/util/iov.o 00:01:55.000 CC lib/util/math.o 00:01:55.000 CC lib/util/net.o 00:01:55.000 CC lib/util/pipe.o 00:01:55.000 CC lib/util/strerror_tls.o 00:01:55.000 CC lib/util/string.o 00:01:55.000 CC lib/util/uuid.o 00:01:55.000 CC lib/util/xor.o 00:01:55.000 CC lib/util/zipf.o 00:01:55.259 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.259 CC lib/vfio_user/host/vfio_user.o 00:01:55.259 LIB libspdk_dma.a 00:01:55.259 LIB libspdk_ioat.a 00:01:55.259 LIB libspdk_vfio_user.a 00:01:55.517 LIB libspdk_util.a 00:01:55.517 LIB libspdk_trace_parser.a 00:01:55.775 CC lib/conf/conf.o 00:01:55.775 CC lib/vmd/vmd.o 00:01:55.775 CC lib/vmd/led.o 00:01:55.775 CC lib/rdma_utils/rdma_utils.o 00:01:55.775 CC lib/json/json_parse.o 00:01:55.775 CC lib/json/json_util.o 00:01:55.775 CC lib/json/json_write.o 00:01:55.775 CC lib/idxd/idxd.o 00:01:55.775 CC lib/idxd/idxd_user.o 00:01:55.775 CC lib/idxd/idxd_kernel.o 00:01:55.775 CC lib/rdma_provider/common.o 00:01:55.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:55.775 CC lib/env_dpdk/env.o 00:01:55.775 CC lib/env_dpdk/memory.o 00:01:55.775 CC lib/env_dpdk/pci.o 00:01:55.775 CC lib/env_dpdk/init.o 00:01:55.775 CC lib/env_dpdk/threads.o 00:01:55.775 CC lib/env_dpdk/pci_ioat.o 00:01:55.775 CC lib/env_dpdk/pci_virtio.o 00:01:55.775 CC lib/env_dpdk/pci_vmd.o 00:01:55.775 CC lib/env_dpdk/pci_idxd.o 00:01:55.775 CC lib/env_dpdk/pci_event.o 00:01:55.775 CC lib/env_dpdk/sigbus_handler.o 00:01:55.775 CC lib/env_dpdk/pci_dpdk.o 00:01:55.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.775 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.775 LIB libspdk_rdma_provider.a 00:01:56.034 LIB libspdk_conf.a 00:01:56.034 LIB libspdk_rdma_utils.a 00:01:56.034 LIB libspdk_json.a 00:01:56.034 LIB libspdk_idxd.a 00:01:56.292 LIB libspdk_vmd.a 00:01:56.292 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.292 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.292 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.292 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.551 LIB libspdk_jsonrpc.a 00:01:56.810 CC lib/rpc/rpc.o 00:01:56.810 LIB libspdk_env_dpdk.a 00:01:56.810 LIB libspdk_rpc.a 00:01:57.068 CC lib/notify/notify.o 00:01:57.068 CC lib/notify/notify_rpc.o 00:01:57.068 CC lib/trace/trace.o 00:01:57.068 CC lib/trace/trace_flags.o 00:01:57.068 CC lib/trace/trace_rpc.o 00:01:57.068 CC lib/keyring/keyring.o 00:01:57.068 CC lib/keyring/keyring_rpc.o 00:01:57.328 LIB libspdk_notify.a 00:01:57.328 LIB libspdk_trace.a 00:01:57.328 LIB libspdk_keyring.a 00:01:57.586 CC lib/thread/thread.o 00:01:57.586 CC lib/sock/sock.o 00:01:57.586 CC lib/thread/iobuf.o 00:01:57.586 CC lib/sock/sock_rpc.o 00:01:57.846 LIB libspdk_sock.a 00:01:58.104 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.104 CC lib/nvme/nvme_ctrlr.o 00:01:58.104 CC lib/nvme/nvme_fabric.o 00:01:58.104 CC lib/nvme/nvme_ns_cmd.o 00:01:58.104 CC lib/nvme/nvme_ns.o 00:01:58.104 CC lib/nvme/nvme_pcie_common.o 00:01:58.104 CC 
lib/nvme/nvme_pcie.o 00:01:58.104 CC lib/nvme/nvme_qpair.o 00:01:58.104 CC lib/nvme/nvme.o 00:01:58.104 CC lib/nvme/nvme_quirks.o 00:01:58.104 CC lib/nvme/nvme_transport.o 00:01:58.104 CC lib/nvme/nvme_discovery.o 00:01:58.104 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.104 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.104 CC lib/nvme/nvme_tcp.o 00:01:58.104 CC lib/nvme/nvme_opal.o 00:01:58.104 CC lib/nvme/nvme_io_msg.o 00:01:58.104 CC lib/nvme/nvme_poll_group.o 00:01:58.104 CC lib/nvme/nvme_zns.o 00:01:58.104 CC lib/nvme/nvme_stubs.o 00:01:58.104 CC lib/nvme/nvme_auth.o 00:01:58.104 CC lib/nvme/nvme_cuse.o 00:01:58.104 CC lib/nvme/nvme_vfio_user.o 00:01:58.104 CC lib/nvme/nvme_rdma.o 00:01:58.672 LIB libspdk_thread.a 00:01:58.930 CC lib/accel/accel.o 00:01:58.930 CC lib/init/json_config.o 00:01:58.930 CC lib/init/subsystem.o 00:01:58.930 CC lib/accel/accel_rpc.o 00:01:58.930 CC lib/accel/accel_sw.o 00:01:58.930 CC lib/init/rpc.o 00:01:58.931 CC lib/init/subsystem_rpc.o 00:01:58.931 CC lib/blob/blobstore.o 00:01:58.931 CC lib/blob/request.o 00:01:58.931 CC lib/blob/zeroes.o 00:01:58.931 CC lib/blob/blob_bs_dev.o 00:01:58.931 CC lib/vfu_tgt/tgt_endpoint.o 00:01:58.931 CC lib/vfu_tgt/tgt_rpc.o 00:01:58.931 CC lib/virtio/virtio.o 00:01:58.931 CC lib/virtio/virtio_vhost_user.o 00:01:58.931 CC lib/virtio/virtio_vfio_user.o 00:01:58.931 CC lib/virtio/virtio_pci.o 00:01:58.931 LIB libspdk_init.a 00:01:59.188 LIB libspdk_vfu_tgt.a 00:01:59.188 LIB libspdk_virtio.a 00:01:59.188 CC lib/event/app.o 00:01:59.188 CC lib/event/reactor.o 00:01:59.188 CC lib/event/log_rpc.o 00:01:59.188 CC lib/event/app_rpc.o 00:01:59.188 CC lib/event/scheduler_static.o 00:01:59.447 LIB libspdk_event.a 00:01:59.705 LIB libspdk_accel.a 00:01:59.705 LIB libspdk_nvme.a 00:01:59.964 CC lib/bdev/bdev.o 00:01:59.964 CC lib/bdev/bdev_rpc.o 00:01:59.964 CC lib/bdev/bdev_zone.o 00:01:59.964 CC lib/bdev/part.o 00:01:59.964 CC lib/bdev/scsi_nvme.o 00:02:00.901 LIB libspdk_blob.a 00:02:01.160 CC lib/lvol/lvol.o 00:02:01.160 CC lib/blobfs/blobfs.o 00:02:01.160 CC lib/blobfs/tree.o 00:02:01.727 LIB libspdk_lvol.a 00:02:01.727 LIB libspdk_blobfs.a 00:02:01.727 LIB libspdk_bdev.a 00:02:01.986 CC lib/ftl/ftl_core.o 00:02:01.986 CC lib/scsi/dev.o 00:02:01.986 CC lib/ftl/ftl_init.o 00:02:01.986 CC lib/scsi/lun.o 00:02:01.986 CC lib/nvmf/ctrlr.o 00:02:01.986 CC lib/ftl/ftl_layout.o 00:02:01.986 CC lib/ftl/ftl_debug.o 00:02:01.986 CC lib/scsi/port.o 00:02:01.986 CC lib/nvmf/ctrlr_discovery.o 00:02:01.986 CC lib/scsi/scsi.o 00:02:01.986 CC lib/ftl/ftl_io.o 00:02:01.986 CC lib/nvmf/ctrlr_bdev.o 00:02:01.986 CC lib/scsi/scsi_bdev.o 00:02:01.986 CC lib/ftl/ftl_sb.o 00:02:01.986 CC lib/nvmf/subsystem.o 00:02:01.986 CC lib/ftl/ftl_l2p.o 00:02:01.986 CC lib/scsi/scsi_pr.o 00:02:01.986 CC lib/ublk/ublk.o 00:02:02.245 CC lib/nvmf/nvmf_rpc.o 00:02:02.245 CC lib/nvmf/nvmf.o 00:02:02.245 CC lib/ublk/ublk_rpc.o 00:02:02.245 CC lib/ftl/ftl_l2p_flat.o 00:02:02.245 CC lib/scsi/scsi_rpc.o 00:02:02.245 CC lib/ftl/ftl_nv_cache.o 00:02:02.245 CC lib/scsi/task.o 00:02:02.245 CC lib/nbd/nbd.o 00:02:02.245 CC lib/nvmf/transport.o 00:02:02.245 CC lib/ftl/ftl_band.o 00:02:02.245 CC lib/nbd/nbd_rpc.o 00:02:02.245 CC lib/ftl/ftl_band_ops.o 00:02:02.245 CC lib/nvmf/tcp.o 00:02:02.245 CC lib/nvmf/stubs.o 00:02:02.245 CC lib/nvmf/mdns_server.o 00:02:02.245 CC lib/ftl/ftl_writer.o 00:02:02.245 CC lib/nvmf/vfio_user.o 00:02:02.245 CC lib/nvmf/rdma.o 00:02:02.245 CC lib/ftl/ftl_rq.o 00:02:02.245 CC lib/ftl/ftl_reloc.o 00:02:02.245 CC lib/nvmf/auth.o 00:02:02.245 CC 
lib/ftl/ftl_l2p_cache.o 00:02:02.245 CC lib/ftl/ftl_p2l.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:02.245 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:02.245 CC lib/ftl/utils/ftl_conf.o 00:02:02.245 CC lib/ftl/utils/ftl_md.o 00:02:02.245 CC lib/ftl/utils/ftl_bitmap.o 00:02:02.245 CC lib/ftl/utils/ftl_mempool.o 00:02:02.245 CC lib/ftl/utils/ftl_property.o 00:02:02.245 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:02.245 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:02.245 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:02.245 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:02.245 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:02.245 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:02.245 CC lib/ftl/base/ftl_base_dev.o 00:02:02.245 CC lib/ftl/base/ftl_base_bdev.o 00:02:02.245 CC lib/ftl/ftl_trace.o 00:02:02.504 LIB libspdk_nbd.a 00:02:02.762 LIB libspdk_scsi.a 00:02:02.762 LIB libspdk_ublk.a 00:02:03.021 CC lib/vhost/vhost.o 00:02:03.021 CC lib/vhost/vhost_rpc.o 00:02:03.021 CC lib/vhost/vhost_scsi.o 00:02:03.021 CC lib/vhost/vhost_blk.o 00:02:03.021 CC lib/vhost/rte_vhost_user.o 00:02:03.021 CC lib/iscsi/conn.o 00:02:03.021 CC lib/iscsi/init_grp.o 00:02:03.021 CC lib/iscsi/iscsi.o 00:02:03.021 CC lib/iscsi/md5.o 00:02:03.021 CC lib/iscsi/param.o 00:02:03.021 CC lib/iscsi/portal_grp.o 00:02:03.021 CC lib/iscsi/tgt_node.o 00:02:03.021 CC lib/iscsi/iscsi_subsystem.o 00:02:03.021 CC lib/iscsi/iscsi_rpc.o 00:02:03.021 CC lib/iscsi/task.o 00:02:03.021 LIB libspdk_ftl.a 00:02:03.588 LIB libspdk_nvmf.a 00:02:03.588 LIB libspdk_vhost.a 00:02:03.846 LIB libspdk_iscsi.a 00:02:04.412 CC module/vfu_device/vfu_virtio.o 00:02:04.412 CC module/vfu_device/vfu_virtio_rpc.o 00:02:04.412 CC module/vfu_device/vfu_virtio_blk.o 00:02:04.412 CC module/vfu_device/vfu_virtio_scsi.o 00:02:04.412 CC module/env_dpdk/env_dpdk_rpc.o 00:02:04.412 CC module/keyring/linux/keyring_rpc.o 00:02:04.412 CC module/keyring/linux/keyring.o 00:02:04.412 CC module/scheduler/gscheduler/gscheduler.o 00:02:04.412 CC module/blob/bdev/blob_bdev.o 00:02:04.412 CC module/accel/error/accel_error_rpc.o 00:02:04.412 CC module/accel/error/accel_error.o 00:02:04.412 CC module/accel/iaa/accel_iaa.o 00:02:04.412 LIB libspdk_env_dpdk_rpc.a 00:02:04.412 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:04.412 CC module/accel/iaa/accel_iaa_rpc.o 00:02:04.412 CC module/keyring/file/keyring.o 00:02:04.412 CC module/keyring/file/keyring_rpc.o 00:02:04.412 CC module/accel/dsa/accel_dsa.o 00:02:04.412 CC module/accel/dsa/accel_dsa_rpc.o 00:02:04.412 CC module/sock/posix/posix.o 00:02:04.412 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:04.412 CC module/accel/ioat/accel_ioat.o 00:02:04.412 CC module/accel/ioat/accel_ioat_rpc.o 00:02:04.412 LIB libspdk_keyring_linux.a 00:02:04.412 LIB libspdk_scheduler_gscheduler.a 00:02:04.412 LIB 
libspdk_keyring_file.a 00:02:04.412 LIB libspdk_scheduler_dpdk_governor.a 00:02:04.412 LIB libspdk_accel_error.a 00:02:04.412 LIB libspdk_scheduler_dynamic.a 00:02:04.670 LIB libspdk_accel_iaa.a 00:02:04.670 LIB libspdk_accel_ioat.a 00:02:04.670 LIB libspdk_blob_bdev.a 00:02:04.670 LIB libspdk_accel_dsa.a 00:02:04.670 LIB libspdk_vfu_device.a 00:02:04.929 LIB libspdk_sock_posix.a 00:02:04.929 CC module/bdev/gpt/vbdev_gpt.o 00:02:04.929 CC module/bdev/gpt/gpt.o 00:02:04.929 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:04.929 CC module/bdev/lvol/vbdev_lvol.o 00:02:04.929 CC module/bdev/split/vbdev_split_rpc.o 00:02:04.929 CC module/bdev/split/vbdev_split.o 00:02:04.929 CC module/bdev/raid/bdev_raid_sb.o 00:02:04.929 CC module/bdev/delay/vbdev_delay.o 00:02:04.929 CC module/bdev/raid/bdev_raid_rpc.o 00:02:04.929 CC module/bdev/raid/bdev_raid.o 00:02:04.929 CC module/bdev/iscsi/bdev_iscsi.o 00:02:04.929 CC module/bdev/raid/raid0.o 00:02:04.929 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:04.929 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:04.929 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:04.929 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:04.929 CC module/bdev/raid/raid1.o 00:02:04.929 CC module/bdev/raid/concat.o 00:02:04.929 CC module/bdev/ftl/bdev_ftl.o 00:02:04.929 CC module/bdev/malloc/bdev_malloc.o 00:02:04.929 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:04.929 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:04.929 CC module/bdev/nvme/bdev_nvme.o 00:02:04.929 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:04.929 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:04.929 CC module/bdev/aio/bdev_aio.o 00:02:04.929 CC module/bdev/aio/bdev_aio_rpc.o 00:02:04.929 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:04.929 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:04.929 CC module/bdev/nvme/nvme_rpc.o 00:02:04.929 CC module/bdev/error/vbdev_error.o 00:02:04.929 CC module/blobfs/bdev/blobfs_bdev.o 00:02:04.929 CC module/bdev/nvme/bdev_mdns_client.o 00:02:04.929 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:04.929 CC module/bdev/error/vbdev_error_rpc.o 00:02:04.929 CC module/bdev/nvme/vbdev_opal.o 00:02:04.929 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:04.929 CC module/bdev/passthru/vbdev_passthru.o 00:02:04.929 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:04.929 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:04.929 CC module/bdev/null/bdev_null.o 00:02:04.929 CC module/bdev/null/bdev_null_rpc.o 00:02:05.188 LIB libspdk_blobfs_bdev.a 00:02:05.188 LIB libspdk_bdev_split.a 00:02:05.188 LIB libspdk_bdev_gpt.a 00:02:05.188 LIB libspdk_bdev_error.a 00:02:05.188 LIB libspdk_bdev_null.a 00:02:05.188 LIB libspdk_bdev_ftl.a 00:02:05.188 LIB libspdk_bdev_passthru.a 00:02:05.188 LIB libspdk_bdev_aio.a 00:02:05.188 LIB libspdk_bdev_zone_block.a 00:02:05.188 LIB libspdk_bdev_iscsi.a 00:02:05.188 LIB libspdk_bdev_delay.a 00:02:05.188 LIB libspdk_bdev_malloc.a 00:02:05.447 LIB libspdk_bdev_lvol.a 00:02:05.447 LIB libspdk_bdev_virtio.a 00:02:05.705 LIB libspdk_bdev_raid.a 00:02:06.643 LIB libspdk_bdev_nvme.a 00:02:07.212 CC module/event/subsystems/keyring/keyring.o 00:02:07.212 CC module/event/subsystems/vmd/vmd.o 00:02:07.212 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:07.212 CC module/event/subsystems/scheduler/scheduler.o 00:02:07.212 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:07.212 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:07.212 CC module/event/subsystems/sock/sock.o 00:02:07.212 CC module/event/subsystems/iobuf/iobuf.o 00:02:07.212 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:02:07.212 LIB libspdk_event_keyring.a 00:02:07.212 LIB libspdk_event_scheduler.a 00:02:07.212 LIB libspdk_event_vfu_tgt.a 00:02:07.212 LIB libspdk_event_vhost_blk.a 00:02:07.212 LIB libspdk_event_vmd.a 00:02:07.212 LIB libspdk_event_sock.a 00:02:07.212 LIB libspdk_event_iobuf.a 00:02:07.471 CC module/event/subsystems/accel/accel.o 00:02:07.471 LIB libspdk_event_accel.a 00:02:07.730 CC module/event/subsystems/bdev/bdev.o 00:02:07.989 LIB libspdk_event_bdev.a 00:02:08.247 CC module/event/subsystems/nbd/nbd.o 00:02:08.247 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:08.247 CC module/event/subsystems/scsi/scsi.o 00:02:08.247 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:08.247 CC module/event/subsystems/ublk/ublk.o 00:02:08.247 LIB libspdk_event_nbd.a 00:02:08.247 LIB libspdk_event_scsi.a 00:02:08.247 LIB libspdk_event_ublk.a 00:02:08.507 LIB libspdk_event_nvmf.a 00:02:08.766 CC module/event/subsystems/iscsi/iscsi.o 00:02:08.766 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:08.766 LIB libspdk_event_vhost_scsi.a 00:02:08.766 LIB libspdk_event_iscsi.a 00:02:09.027 CC app/trace_record/trace_record.o 00:02:09.027 CXX app/trace/trace.o 00:02:09.027 CC app/spdk_top/spdk_top.o 00:02:09.027 CC app/spdk_nvme_discover/discovery_aer.o 00:02:09.027 CC app/spdk_nvme_perf/perf.o 00:02:09.027 CC app/spdk_nvme_identify/identify.o 00:02:09.027 CC app/spdk_lspci/spdk_lspci.o 00:02:09.027 TEST_HEADER include/spdk/accel.h 00:02:09.027 TEST_HEADER include/spdk/assert.h 00:02:09.027 TEST_HEADER include/spdk/accel_module.h 00:02:09.027 TEST_HEADER include/spdk/bdev.h 00:02:09.027 TEST_HEADER include/spdk/barrier.h 00:02:09.027 TEST_HEADER include/spdk/base64.h 00:02:09.027 CC test/rpc_client/rpc_client_test.o 00:02:09.027 TEST_HEADER include/spdk/bdev_module.h 00:02:09.027 TEST_HEADER include/spdk/bdev_zone.h 00:02:09.027 TEST_HEADER include/spdk/bit_array.h 00:02:09.027 TEST_HEADER include/spdk/bit_pool.h 00:02:09.027 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:09.027 TEST_HEADER include/spdk/blob_bdev.h 00:02:09.027 TEST_HEADER include/spdk/blobfs.h 00:02:09.027 TEST_HEADER include/spdk/blob.h 00:02:09.027 TEST_HEADER include/spdk/config.h 00:02:09.027 TEST_HEADER include/spdk/conf.h 00:02:09.027 TEST_HEADER include/spdk/crc16.h 00:02:09.027 TEST_HEADER include/spdk/cpuset.h 00:02:09.027 TEST_HEADER include/spdk/crc32.h 00:02:09.027 TEST_HEADER include/spdk/dif.h 00:02:09.027 TEST_HEADER include/spdk/dma.h 00:02:09.027 TEST_HEADER include/spdk/crc64.h 00:02:09.027 TEST_HEADER include/spdk/endian.h 00:02:09.027 TEST_HEADER include/spdk/env_dpdk.h 00:02:09.027 TEST_HEADER include/spdk/env.h 00:02:09.027 TEST_HEADER include/spdk/fd_group.h 00:02:09.027 TEST_HEADER include/spdk/fd.h 00:02:09.027 TEST_HEADER include/spdk/event.h 00:02:09.027 TEST_HEADER include/spdk/gpt_spec.h 00:02:09.027 TEST_HEADER include/spdk/ftl.h 00:02:09.027 TEST_HEADER include/spdk/file.h 00:02:09.027 TEST_HEADER include/spdk/hexlify.h 00:02:09.027 TEST_HEADER include/spdk/histogram_data.h 00:02:09.027 TEST_HEADER include/spdk/idxd_spec.h 00:02:09.027 TEST_HEADER include/spdk/idxd.h 00:02:09.027 TEST_HEADER include/spdk/ioat.h 00:02:09.027 TEST_HEADER include/spdk/ioat_spec.h 00:02:09.027 TEST_HEADER include/spdk/init.h 00:02:09.027 TEST_HEADER include/spdk/iscsi_spec.h 00:02:09.027 TEST_HEADER include/spdk/jsonrpc.h 00:02:09.027 TEST_HEADER include/spdk/json.h 00:02:09.027 TEST_HEADER include/spdk/keyring.h 00:02:09.027 TEST_HEADER include/spdk/keyring_module.h 00:02:09.027 TEST_HEADER include/spdk/likely.h 00:02:09.027 
TEST_HEADER include/spdk/log.h 00:02:09.027 TEST_HEADER include/spdk/lvol.h 00:02:09.027 CC app/spdk_dd/spdk_dd.o 00:02:09.027 TEST_HEADER include/spdk/mmio.h 00:02:09.027 TEST_HEADER include/spdk/nbd.h 00:02:09.027 TEST_HEADER include/spdk/net.h 00:02:09.027 TEST_HEADER include/spdk/notify.h 00:02:09.027 TEST_HEADER include/spdk/memory.h 00:02:09.027 TEST_HEADER include/spdk/nvme_intel.h 00:02:09.027 TEST_HEADER include/spdk/nvme.h 00:02:09.027 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:09.027 TEST_HEADER include/spdk/nvme_spec.h 00:02:09.027 CC app/iscsi_tgt/iscsi_tgt.o 00:02:09.027 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:09.027 TEST_HEADER include/spdk/nvme_zns.h 00:02:09.027 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:09.027 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:09.027 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:09.027 TEST_HEADER include/spdk/nvmf_spec.h 00:02:09.027 TEST_HEADER include/spdk/nvmf.h 00:02:09.027 TEST_HEADER include/spdk/nvmf_transport.h 00:02:09.027 TEST_HEADER include/spdk/opal.h 00:02:09.027 TEST_HEADER include/spdk/opal_spec.h 00:02:09.027 TEST_HEADER include/spdk/pci_ids.h 00:02:09.027 TEST_HEADER include/spdk/pipe.h 00:02:09.027 TEST_HEADER include/spdk/queue.h 00:02:09.027 TEST_HEADER include/spdk/reduce.h 00:02:09.027 TEST_HEADER include/spdk/rpc.h 00:02:09.027 TEST_HEADER include/spdk/scsi.h 00:02:09.027 CC app/nvmf_tgt/nvmf_main.o 00:02:09.027 TEST_HEADER include/spdk/scheduler.h 00:02:09.027 TEST_HEADER include/spdk/sock.h 00:02:09.027 TEST_HEADER include/spdk/scsi_spec.h 00:02:09.027 TEST_HEADER include/spdk/stdinc.h 00:02:09.027 TEST_HEADER include/spdk/string.h 00:02:09.027 TEST_HEADER include/spdk/thread.h 00:02:09.027 CC app/spdk_tgt/spdk_tgt.o 00:02:09.027 TEST_HEADER include/spdk/trace_parser.h 00:02:09.027 TEST_HEADER include/spdk/trace.h 00:02:09.027 TEST_HEADER include/spdk/ublk.h 00:02:09.027 TEST_HEADER include/spdk/tree.h 00:02:09.027 TEST_HEADER include/spdk/util.h 00:02:09.027 TEST_HEADER include/spdk/uuid.h 00:02:09.027 TEST_HEADER include/spdk/version.h 00:02:09.027 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:09.027 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:09.027 TEST_HEADER include/spdk/vmd.h 00:02:09.027 TEST_HEADER include/spdk/vhost.h 00:02:09.027 TEST_HEADER include/spdk/zipf.h 00:02:09.027 TEST_HEADER include/spdk/xor.h 00:02:09.027 CXX test/cpp_headers/accel_module.o 00:02:09.027 CXX test/cpp_headers/assert.o 00:02:09.027 CXX test/cpp_headers/barrier.o 00:02:09.027 CXX test/cpp_headers/accel.o 00:02:09.027 CXX test/cpp_headers/bdev.o 00:02:09.027 CXX test/cpp_headers/base64.o 00:02:09.027 CXX test/cpp_headers/bdev_module.o 00:02:09.027 CXX test/cpp_headers/bdev_zone.o 00:02:09.027 CXX test/cpp_headers/bit_array.o 00:02:09.027 CXX test/cpp_headers/bit_pool.o 00:02:09.027 CXX test/cpp_headers/blobfs.o 00:02:09.027 CXX test/cpp_headers/blob_bdev.o 00:02:09.027 CXX test/cpp_headers/blobfs_bdev.o 00:02:09.027 CXX test/cpp_headers/conf.o 00:02:09.027 CXX test/cpp_headers/blob.o 00:02:09.027 CXX test/cpp_headers/config.o 00:02:09.027 CXX test/cpp_headers/crc16.o 00:02:09.027 CXX test/cpp_headers/cpuset.o 00:02:09.027 CXX test/cpp_headers/crc64.o 00:02:09.027 CXX test/cpp_headers/crc32.o 00:02:09.027 CXX test/cpp_headers/dif.o 00:02:09.027 CXX test/cpp_headers/dma.o 00:02:09.027 CXX test/cpp_headers/endian.o 00:02:09.027 CXX test/cpp_headers/env_dpdk.o 00:02:09.027 CXX test/cpp_headers/env.o 00:02:09.027 CXX test/cpp_headers/event.o 00:02:09.027 CXX test/cpp_headers/fd_group.o 00:02:09.027 CXX 
test/cpp_headers/file.o 00:02:09.028 CXX test/cpp_headers/fd.o 00:02:09.028 CXX test/cpp_headers/ftl.o 00:02:09.028 CXX test/cpp_headers/gpt_spec.o 00:02:09.028 CXX test/cpp_headers/hexlify.o 00:02:09.028 CXX test/cpp_headers/idxd.o 00:02:09.028 CXX test/cpp_headers/histogram_data.o 00:02:09.028 CXX test/cpp_headers/idxd_spec.o 00:02:09.028 CXX test/cpp_headers/init.o 00:02:09.028 CXX test/cpp_headers/ioat.o 00:02:09.028 CXX test/cpp_headers/ioat_spec.o 00:02:09.028 CXX test/cpp_headers/iscsi_spec.o 00:02:09.028 CXX test/cpp_headers/json.o 00:02:09.028 CXX test/cpp_headers/jsonrpc.o 00:02:09.028 CXX test/cpp_headers/keyring.o 00:02:09.028 CXX test/cpp_headers/keyring_module.o 00:02:09.028 CXX test/cpp_headers/likely.o 00:02:09.028 CXX test/cpp_headers/log.o 00:02:09.028 CXX test/cpp_headers/lvol.o 00:02:09.028 CXX test/cpp_headers/memory.o 00:02:09.028 CXX test/cpp_headers/mmio.o 00:02:09.028 CXX test/cpp_headers/nbd.o 00:02:09.028 CXX test/cpp_headers/net.o 00:02:09.288 CXX test/cpp_headers/notify.o 00:02:09.288 CXX test/cpp_headers/nvme.o 00:02:09.288 CXX test/cpp_headers/nvme_intel.o 00:02:09.288 CXX test/cpp_headers/nvme_ocssd.o 00:02:09.288 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:09.288 CXX test/cpp_headers/nvme_spec.o 00:02:09.288 CXX test/cpp_headers/nvme_zns.o 00:02:09.288 CC test/app/histogram_perf/histogram_perf.o 00:02:09.288 CC test/thread/lock/spdk_lock.o 00:02:09.288 CC test/thread/poller_perf/poller_perf.o 00:02:09.288 CC examples/util/zipf/zipf.o 00:02:09.288 CC test/env/vtophys/vtophys.o 00:02:09.288 CC test/env/pci/pci_ut.o 00:02:09.288 CC test/app/stub/stub.o 00:02:09.288 CC test/app/jsoncat/jsoncat.o 00:02:09.288 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:09.288 CC examples/ioat/verify/verify.o 00:02:09.288 CC test/env/memory/memory_ut.o 00:02:09.288 CC app/fio/nvme/fio_plugin.o 00:02:09.288 CC examples/ioat/perf/perf.o 00:02:09.288 CXX test/cpp_headers/nvmf_cmd.o 00:02:09.288 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:09.288 LINK spdk_lspci 00:02:09.288 CC test/app/bdev_svc/bdev_svc.o 00:02:09.288 CC app/fio/bdev/fio_plugin.o 00:02:09.288 CC test/dma/test_dma/test_dma.o 00:02:09.288 CC test/env/mem_callbacks/mem_callbacks.o 00:02:09.288 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:09.288 LINK spdk_nvme_discover 00:02:09.288 LINK rpc_client_test 00:02:09.288 CXX test/cpp_headers/nvmf.o 00:02:09.288 LINK spdk_trace_record 00:02:09.288 CXX test/cpp_headers/nvmf_spec.o 00:02:09.288 CXX test/cpp_headers/nvmf_transport.o 00:02:09.288 CXX test/cpp_headers/opal.o 00:02:09.288 CXX test/cpp_headers/opal_spec.o 00:02:09.288 CXX test/cpp_headers/pci_ids.o 00:02:09.288 CXX test/cpp_headers/pipe.o 00:02:09.288 CXX test/cpp_headers/queue.o 00:02:09.288 CXX test/cpp_headers/reduce.o 00:02:09.288 CXX test/cpp_headers/rpc.o 00:02:09.288 CXX test/cpp_headers/scheduler.o 00:02:09.288 LINK jsoncat 00:02:09.288 LINK interrupt_tgt 00:02:09.288 LINK vtophys 00:02:09.288 CXX test/cpp_headers/scsi.o 00:02:09.288 LINK zipf 00:02:09.288 LINK nvmf_tgt 00:02:09.288 CXX test/cpp_headers/scsi_spec.o 00:02:09.288 CXX test/cpp_headers/sock.o 00:02:09.288 LINK poller_perf 00:02:09.288 CXX test/cpp_headers/stdinc.o 00:02:09.288 LINK histogram_perf 00:02:09.288 CXX test/cpp_headers/string.o 00:02:09.288 CXX test/cpp_headers/thread.o 00:02:09.288 CXX test/cpp_headers/trace.o 00:02:09.288 CXX test/cpp_headers/trace_parser.o 00:02:09.288 LINK iscsi_tgt 00:02:09.288 CXX test/cpp_headers/tree.o 00:02:09.288 CXX test/cpp_headers/ublk.o 00:02:09.288 CXX test/cpp_headers/util.o 00:02:09.288 CXX 
test/cpp_headers/uuid.o 00:02:09.288 CXX test/cpp_headers/version.o 00:02:09.288 CXX test/cpp_headers/vfio_user_pci.o 00:02:09.288 CXX test/cpp_headers/vfio_user_spec.o 00:02:09.288 CXX test/cpp_headers/vhost.o 00:02:09.288 CXX test/cpp_headers/vmd.o 00:02:09.288 CXX test/cpp_headers/xor.o 00:02:09.288 CXX test/cpp_headers/zipf.o 00:02:09.288 LINK env_dpdk_post_init 00:02:09.547 fio_plugin.c:1584:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:09.547 struct spdk_nvme_fdp_ruhs ruhs; 00:02:09.547 ^ 00:02:09.547 LINK stub 00:02:09.547 LINK spdk_tgt 00:02:09.547 LINK verify 00:02:09.547 LINK bdev_svc 00:02:09.547 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:09.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:09.547 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:09.547 LINK ioat_perf 00:02:09.547 LINK spdk_trace 00:02:09.547 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:09.547 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:09.547 LINK test_dma 00:02:09.547 LINK spdk_dd 00:02:09.805 LINK nvme_fuzz 00:02:09.805 LINK pci_ut 00:02:09.805 LINK llvm_vfio_fuzz 00:02:09.805 LINK spdk_bdev 00:02:09.805 1 warning generated. 00:02:09.805 LINK mem_callbacks 00:02:09.805 LINK spdk_nvme_identify 00:02:09.805 LINK spdk_nvme 00:02:09.805 LINK vhost_fuzz 00:02:10.063 LINK spdk_nvme_perf 00:02:10.063 CC app/vhost/vhost.o 00:02:10.063 LINK spdk_top 00:02:10.063 LINK llvm_nvme_fuzz 00:02:10.063 CC examples/vmd/lsvmd/lsvmd.o 00:02:10.063 CC examples/idxd/perf/perf.o 00:02:10.063 CC examples/sock/hello_world/hello_sock.o 00:02:10.063 CC examples/vmd/led/led.o 00:02:10.320 CC examples/thread/thread/thread_ex.o 00:02:10.320 LINK vhost 00:02:10.320 LINK lsvmd 00:02:10.320 LINK led 00:02:10.320 LINK memory_ut 00:02:10.320 LINK hello_sock 00:02:10.320 LINK idxd_perf 00:02:10.320 LINK thread 00:02:10.578 LINK spdk_lock 00:02:10.837 LINK iscsi_fuzz 00:02:11.094 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:11.094 CC examples/nvme/reconnect/reconnect.o 00:02:11.094 CC examples/nvme/hello_world/hello_world.o 00:02:11.094 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:11.094 CC examples/nvme/hotplug/hotplug.o 00:02:11.094 CC examples/nvme/arbitration/arbitration.o 00:02:11.094 CC examples/nvme/abort/abort.o 00:02:11.094 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:11.352 CC test/event/reactor_perf/reactor_perf.o 00:02:11.352 CC test/event/reactor/reactor.o 00:02:11.352 CC test/event/event_perf/event_perf.o 00:02:11.352 LINK pmr_persistence 00:02:11.352 CC test/event/app_repeat/app_repeat.o 00:02:11.352 LINK cmb_copy 00:02:11.352 CC test/event/scheduler/scheduler.o 00:02:11.352 LINK hello_world 00:02:11.352 LINK hotplug 00:02:11.352 LINK reconnect 00:02:11.352 LINK reactor_perf 00:02:11.352 LINK reactor 00:02:11.352 LINK event_perf 00:02:11.352 LINK arbitration 00:02:11.352 LINK abort 00:02:11.352 LINK app_repeat 00:02:11.610 LINK nvme_manage 00:02:11.610 LINK scheduler 00:02:11.610 CC test/nvme/err_injection/err_injection.o 00:02:11.610 CC test/nvme/fused_ordering/fused_ordering.o 00:02:11.610 CC test/nvme/reset/reset.o 00:02:11.611 CC test/nvme/reserve/reserve.o 00:02:11.611 CC test/nvme/e2edp/nvme_dp.o 00:02:11.611 CC test/nvme/connect_stress/connect_stress.o 00:02:11.611 CC test/nvme/aer/aer.o 00:02:11.611 CC test/nvme/compliance/nvme_compliance.o 00:02:11.611 CC test/nvme/sgl/sgl.o 00:02:11.611 CC test/nvme/overhead/overhead.o 00:02:11.611 CC 
test/nvme/simple_copy/simple_copy.o 00:02:11.611 CC test/nvme/boot_partition/boot_partition.o 00:02:11.611 CC test/nvme/startup/startup.o 00:02:11.611 CC test/nvme/cuse/cuse.o 00:02:11.611 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:11.611 CC test/nvme/fdp/fdp.o 00:02:11.611 CC test/accel/dif/dif.o 00:02:11.611 CC test/blobfs/mkfs/mkfs.o 00:02:11.611 CC test/lvol/esnap/esnap.o 00:02:11.611 LINK boot_partition 00:02:11.611 LINK connect_stress 00:02:11.611 LINK startup 00:02:11.611 LINK err_injection 00:02:11.611 LINK fused_ordering 00:02:11.611 LINK reserve 00:02:11.611 LINK doorbell_aers 00:02:11.869 LINK simple_copy 00:02:11.869 LINK reset 00:02:11.869 LINK nvme_dp 00:02:11.869 LINK aer 00:02:11.869 LINK sgl 00:02:11.869 LINK mkfs 00:02:11.869 LINK overhead 00:02:11.869 LINK fdp 00:02:11.869 LINK nvme_compliance 00:02:11.869 LINK dif 00:02:12.127 CC examples/accel/perf/accel_perf.o 00:02:12.386 CC examples/blob/cli/blobcli.o 00:02:12.386 CC examples/blob/hello_world/hello_blob.o 00:02:12.386 LINK hello_blob 00:02:12.645 LINK accel_perf 00:02:12.645 LINK cuse 00:02:12.645 LINK blobcli 00:02:13.583 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.583 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.583 LINK hello_bdev 00:02:13.583 CC test/bdev/bdevio/bdevio.o 00:02:13.842 LINK bdevperf 00:02:14.101 LINK bdevio 00:02:15.477 CC examples/nvmf/nvmf/nvmf.o 00:02:15.477 LINK esnap 00:02:15.477 LINK nvmf 00:02:16.854 00:02:16.854 real 0m44.019s 00:02:16.854 user 6m29.709s 00:02:16.854 sys 2m6.009s 00:02:16.854 08:15:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:16.854 08:15:29 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.854 ************************************ 00:02:16.854 END TEST make 00:02:16.854 ************************************ 00:02:16.854 08:15:29 -- common/autotest_common.sh@1142 -- $ return 0 00:02:16.854 08:15:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.854 08:15:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.854 08:15:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.854 08:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.854 08:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.854 08:15:29 -- pm/common@44 -- $ pid=471951 00:02:16.854 08:15:29 -- pm/common@50 -- $ kill -TERM 471951 00:02:16.854 08:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.855 08:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.855 08:15:29 -- pm/common@44 -- $ pid=471953 00:02:16.855 08:15:29 -- pm/common@50 -- $ kill -TERM 471953 00:02:16.855 08:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.855 08:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.855 08:15:29 -- pm/common@44 -- $ pid=471955 00:02:16.855 08:15:29 -- pm/common@50 -- $ kill -TERM 471955 00:02:16.855 08:15:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.855 08:15:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.855 08:15:29 -- pm/common@44 -- $ pid=471978 00:02:16.855 08:15:29 -- pm/common@50 -- $ sudo -E kill -TERM 471978 00:02:17.114 08:15:29 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.114 08:15:29 -- nvmf/common.sh@7 -- # uname -s 00:02:17.114 08:15:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.114 08:15:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.114 08:15:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.114 08:15:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.114 08:15:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.114 08:15:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.114 08:15:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.114 08:15:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.114 08:15:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.114 08:15:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.115 08:15:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:02:17.115 08:15:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:02:17.115 08:15:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.115 08:15:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.115 08:15:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:17.115 08:15:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.115 08:15:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:17.115 08:15:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.115 08:15:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.115 08:15:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.115 08:15:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.115 08:15:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.115 08:15:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.115 08:15:29 -- paths/export.sh@5 -- # export PATH 00:02:17.115 08:15:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.115 08:15:29 -- nvmf/common.sh@47 -- # : 0 00:02:17.115 08:15:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:17.115 08:15:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:17.115 08:15:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.115 08:15:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.115 08:15:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:02:17.115 08:15:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:17.115 08:15:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:17.115 08:15:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:17.115 08:15:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.115 08:15:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.115 08:15:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.115 08:15:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.115 08:15:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:17.115 08:15:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.115 08:15:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:17.115 08:15:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.115 08:15:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.115 08:15:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.115 08:15:29 -- spdk/autotest.sh@48 -- # udevadm_pid=532412 00:02:17.115 08:15:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.115 08:15:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.115 08:15:29 -- pm/common@17 -- # local monitor 00:02:17.115 08:15:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.115 08:15:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.115 08:15:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.115 08:15:29 -- pm/common@21 -- # date +%s 00:02:17.115 08:15:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.115 08:15:29 -- pm/common@21 -- # date +%s 00:02:17.115 08:15:29 -- pm/common@25 -- # sleep 1 00:02:17.115 08:15:29 -- pm/common@21 -- # date +%s 00:02:17.115 08:15:29 -- pm/common@21 -- # date +%s 00:02:17.115 08:15:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329 00:02:17.115 08:15:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329 00:02:17.115 08:15:29 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329 00:02:17.115 08:15:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329 00:02:17.115 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715329_collect-vmstat.pm.log 00:02:17.115 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715329_collect-cpu-load.pm.log 00:02:17.115 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715329_collect-cpu-temp.pm.log 00:02:17.115 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721715329_collect-bmc-pm.bmc.pm.log 
00:02:18.052 08:15:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:18.052 08:15:30 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:18.052 08:15:30 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:18.052 08:15:30 -- common/autotest_common.sh@10 -- # set +x
00:02:18.052 08:15:30 -- spdk/autotest.sh@59 -- # create_test_list
00:02:18.052 08:15:30 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:18.052 08:15:30 -- common/autotest_common.sh@10 -- # set +x
00:02:18.052 08:15:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh
00:02:18.052 08:15:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:18.052 08:15:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:18.052 08:15:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:02:18.052 08:15:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:02:18.052 08:15:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:18.052 08:15:30 -- common/autotest_common.sh@1455 -- # uname
00:02:18.052 08:15:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:18.052 08:15:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:18.052 08:15:30 -- common/autotest_common.sh@1475 -- # uname
00:02:18.052 08:15:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:18.052 08:15:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:18.052 08:15:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang
00:02:18.052 08:15:30 -- spdk/autotest.sh@72 -- # hash lcov
00:02:18.052 08:15:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:02:18.053 08:15:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:02:18.053 08:15:30 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:18.053 08:15:30 -- common/autotest_common.sh@10 -- # set +x
00:02:18.053 08:15:30 -- spdk/autotest.sh@91 -- # rm -f
00:02:18.053 08:15:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:02:22.248 0000:dd:00.0 (8086 0a54): Already using the nvme driver
00:02:22.248 0000:df:00.0 (8086 0a54): Already using the nvme driver
00:02:22.248 0000:de:00.0 (8086 0953): Already using the nvme driver
00:02:22.248 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:02:22.248 0000:dc:00.0 (8086 0953): Already using the nvme driver
00:02:23.624 08:15:35 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:02:23.624 08:15:35 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:23.624 08:15:35 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:23.624 08:15:35 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:23.624 08:15:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:23.624 08:15:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:23.624 08:15:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:23.624 08:15:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:23.624 08:15:35 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1662 -- # local device=nvme3n1
00:02:23.624 08:15:35 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:02:23.624 08:15:35 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:23.624 08:15:35 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:02:23.624 08:15:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:23.624 08:15:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:23.624 08:15:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:02:23.624 08:15:35 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:23.624 08:15:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:23.624 No valid GPT data, bailing
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # pt=
00:02:23.624 08:15:35 -- scripts/common.sh@392 -- # return 1
00:02:23.624 08:15:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:23.624 1+0 records in
00:02:23.624 1+0 records out
00:02:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460109 s, 228 MB/s
00:02:23.624 08:15:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:23.624 08:15:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:23.624 08:15:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1
00:02:23.624 08:15:35 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt
00:02:23.624 08:15:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:02:23.624 No valid GPT data, bailing
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # pt=
00:02:23.624 08:15:35 -- scripts/common.sh@392 -- # return 1
00:02:23.624 08:15:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:02:23.624 1+0 records in
00:02:23.624 1+0 records out
00:02:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616769 s, 170 MB/s
00:02:23.624 08:15:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:23.624 08:15:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:23.624 08:15:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1
00:02:23.624 08:15:35 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt
00:02:23.624 08:15:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:02:23.624 No valid GPT data, bailing
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:02:23.624 08:15:35 -- scripts/common.sh@391 -- # pt=
00:02:23.624 08:15:35 -- scripts/common.sh@392 -- # return 1
00:02:23.624 08:15:35 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:02:23.624 1+0 records in
00:02:23.624 1+0 records out
00:02:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00430081 s, 244 MB/s
00:02:23.624 08:15:35 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:02:23.624 08:15:35 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:02:23.624 08:15:35 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1
00:02:23.624 08:15:35 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt
00:02:23.624 08:15:35 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:02:23.624 No valid GPT data, bailing
00:02:23.624 08:15:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:02:23.624 08:15:36 -- scripts/common.sh@391 -- # pt=
00:02:23.624 08:15:36 -- scripts/common.sh@392 -- # return 1
00:02:23.624 08:15:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:02:23.624 1+0 records in
00:02:23.624 1+0 records out
00:02:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00296257 s, 354 MB/s
00:02:23.624 08:15:36 -- spdk/autotest.sh@118 -- # sync
00:02:23.624 08:15:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:23.624 08:15:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:23.624 08:15:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:28.894 08:15:40 -- spdk/autotest.sh@124 -- # uname -s
00:02:28.894 08:15:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:02:28.894 08:15:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:28.894 08:15:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:28.894 08:15:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:28.894 08:15:40 -- common/autotest_common.sh@10 -- # set +x
00:02:28.894 ************************************
00:02:28.894 START TEST setup.sh
00:02:28.894 ************************************
00:02:28.894 08:15:40 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh
00:02:28.894 * Looking for test storage...
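Every /dev/nvmeXn1 above passes through the same block_in_use guard: spdk-gpt.py and blkid both have to come back empty ("No valid GPT data, bailing", pt=) before autotest treats the disk as free and zeroes its first MiB to clear stale metadata. The shape of that guard, as a rough sketch with the spdk-gpt.py step and error handling trimmed:

    for dev in /dev/nvme*n1; do
        # blkid emits a PTTYPE value (gpt, dos, ...) only when a partition table exists.
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            # Nothing claims the disk: wipe the first MiB so old GPT/RAID headers are gone.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done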
00:02:28.894 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:28.895 08:15:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:28.895 08:15:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:28.895 08:15:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:28.895 08:15:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:28.895 08:15:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:28.895 08:15:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:28.895 ************************************ 00:02:28.895 START TEST acl 00:02:28.895 ************************************ 00:02:28.895 08:15:40 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:28.895 * Looking for test storage... 00:02:28.895 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:28.895 08:15:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:28.895 08:15:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:28.895 08:15:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:02:28.895 08:15:41 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:28.895 08:15:41 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:28.895 08:15:41 setup.sh.acl -- setup/acl.sh@12 -- # declare -a 
devs 00:02:28.895 08:15:41 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:28.895 08:15:41 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:28.895 08:15:41 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:28.895 08:15:41 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:28.895 08:15:41 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:34.169 08:15:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:34.169 08:15:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:34.169 08:15:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.169 08:15:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:34.169 08:15:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.169 08:15:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:37.464 Hugepages 00:02:37.464 node hugesize free / total 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 00:02:37.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ 
ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:37.464 08:15:49 
setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dc:00.0 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:dd:00.0 == *:*:*.* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]] 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:37.464 08:15:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:de:00.0 == *:*:*.* ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:df:00.0 == *:*:*.* ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]] 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:02:37.724 08:15:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:37.724 08:15:50 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:37.724 08:15:50 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.724 08:15:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:37.724 ************************************ 00:02:37.724 START TEST denied 00:02:37.724 ************************************ 00:02:37.724 08:15:50 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:37.724 08:15:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:dc:00.0' 00:02:37.724 08:15:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:37.724 08:15:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:dc:00.0' 00:02:37.724 08:15:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.724 08:15:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:02:45.849 0000:dc:00.0 (8086 0953): Skipping denied controller at 0000:dc:00.0 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # 
verify 0000:dc:00.0 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dc:00.0 ]] 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dc:00.0/driver 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.849 08:15:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.974 00:02:53.974 real 0m15.822s 00:02:53.974 user 0m3.736s 00:02:53.974 sys 0m6.747s 00:02:53.974 08:16:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.974 08:16:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:53.974 ************************************ 00:02:53.974 END TEST denied 00:02:53.974 ************************************ 00:02:53.974 08:16:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:53.974 08:16:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:53.974 08:16:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.974 08:16:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.974 08:16:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.974 ************************************ 00:02:53.974 START TEST allowed 00:02:53.974 ************************************ 00:02:53.974 08:16:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:53.974 08:16:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:dc:00.0 00:02:53.974 08:16:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:53.974 08:16:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:dc:00.0 .*: nvme -> .*' 00:02:53.974 08:16:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.974 08:16:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:02.095 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:dd:00.0 ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:dd:00.0/driver 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:de:00.0 ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:de:00.0/driver 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:df:00.0 ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:df:00.0/driver 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.095 08:16:13 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.665 00:03:08.665 real 0m14.893s 00:03:08.665 user 0m3.756s 00:03:08.665 sys 0m6.505s 00:03:08.665 08:16:20 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.665 08:16:20 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:08.665 ************************************ 00:03:08.665 END TEST allowed 00:03:08.665 ************************************ 00:03:08.665 08:16:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:08.665 00:03:08.665 real 0m40.117s 00:03:08.665 user 0m10.976s 00:03:08.665 sys 0m19.351s 00:03:08.665 08:16:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.665 08:16:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:08.665 ************************************ 00:03:08.665 END TEST acl 00:03:08.665 ************************************ 00:03:08.665 08:16:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:08.665 08:16:21 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:08.665 08:16:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.665 08:16:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.665 08:16:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:08.665 ************************************ 00:03:08.665 START TEST hugepages 00:03:08.665 ************************************ 00:03:08.665 08:16:21 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:08.665 * Looking for test storage... 
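The two ACL tests above exercise the same filter in scripts/setup.sh: a BDF listed in PCI_BLOCKED is skipped ("Skipping denied controller at 0000:dc:00.0"), while a non-empty PCI_ALLOWED restricts setup to the listed BDFs only (here 0000:dc:00.0 becomes the one device rebound to vfio-pci). A simplified sketch of that allow/deny decision, not setup.sh's exact code:

    pci_can_use() {
        local bdf=$1
        # The deny list always wins.
        [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
        # An empty allow list permits everything that is not blocked.
        [[ -z $PCI_ALLOWED ]] && return 0
        [[ " $PCI_ALLOWED " == *" $bdf "* ]]
    }

    PCI_BLOCKED=' 0000:dc:00.0' pci_can_use 0000:dc:00.0 \
        || echo 'Skipping denied controller at 0000:dc:00.0'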
00:03:08.665 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:08.925 08:16:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 40966020 kB' 'MemAvailable: 44770408 kB' 'Buffers: 12164 kB' 'Cached: 11927016 kB' 'SwapCached: 0 kB' 'Active: 8879544 kB' 'Inactive: 3619028 kB' 'Active(anon): 8486940 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563232 kB' 'Mapped: 174360 kB' 'Shmem: 7927548 kB' 'KReclaimable: 440476 kB' 'Slab: 876132 kB' 'SReclaimable: 440476 kB' 'SUnreclaim: 435656 kB' 'KernelStack: 18992 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36438948 kB' 'Committed_AS: 9895252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207848 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 
08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.926 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:08.927 08:16:21 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.927 08:16:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:08.927 08:16:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.928 08:16:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.928 08:16:21 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.928 ************************************ 00:03:08.928 START TEST default_setup 00:03:08.928 ************************************ 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.928 08:16:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:13.123 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:13.123 
00:03:13.123 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:13.123 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:14.503 0000:df:00.0 (8086 0a54): nvme -> vfio-pci
00:03:14.503 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci
00:03:14.503 0000:de:00.0 (8086 0953): nvme -> vfio-pci
00:03:14.762 0000:dc:00.0 (8086 0953): nvme -> vfio-pci
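The lines above show scripts/setup.sh detaching each I/OAT DMA channel and NVMe controller from its kernel driver and attaching it to vfio-pci so user-space drivers can claim the devices. A generic sysfs sequence that performs the same rebind for one function (a sketch of the mechanism only, not the actual setup.sh code):

    bdf=0000:80:04.0                                                    # one of the functions listed above
    echo "$bdf"     | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind      # drop ioatdma/nvme
    echo vfio-pci   | sudo tee /sys/bus/pci/devices/$bdf/driver_override    # pin the new driver
    echo "$bdf"     | sudo tee /sys/bus/pci/drivers_probe                   # let the kernel rebind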
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.140 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43181092 kB' 'MemAvailable: 46982488 kB' 'Buffers: 12164 kB' 'Cached: 11927160 kB' 'SwapCached: 0 kB' 'Active: 8896548 kB' 'Inactive: 3619028 kB' 'Active(anon): 8503944 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579760 kB' 'Mapped: 174672 kB' 'Shmem: 7927692 kB' 'KReclaimable: 437484 kB' 'Slab: 868488 kB' 'SReclaimable: 437484 kB' 'SUnreclaim: 431004 kB' 'KernelStack: 19312 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9928256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208184 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[trace condensed: setup/common.sh@31-32 reads each field and continues past every non-match, from MemTotal through HardwareCorrupted, until the requested key is reached]
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
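Each get_meminfo call traced here follows the same shape: pick /proc/meminfo (or the per-node file when a node argument is given and /sys/devices/system/node/node$node/meminfo exists), mapfile the contents, strip any "Node N " prefix, then scan key/value pairs with IFS=': ' until the requested field matches. A condensed reconstruction from the traced commands (hedged, not copied from setup/common.sh):

    shopt -s extglob                       # needed for the +([0-9]) prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # prefer the per-NUMA-node view when a node is requested and present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"           # e.g. 'AnonHugePages: 0 kB' -> 0
                return 0
            fi
        done
        echo 0
    }
    get_meminfo AnonHugePages              # -> 0 on this host, the value stored in anon above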
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.141 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.142 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43181268 kB' 'MemAvailable: 46982664 kB' 'Buffers: 12164 kB' 'Cached: 11927164 kB' 'SwapCached: 0 kB' 'Active: 8897460 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504856 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580544 kB' 'Mapped: 174748 kB' 'Shmem: 7927696 kB' 'KReclaimable: 437484 kB' 'Slab: 868552 kB' 'SReclaimable: 437484 kB' 'SUnreclaim: 431068 kB' 'KernelStack: 19440 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9925536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208120 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[trace condensed: setup/common.sh@31-32 reads each field and continues past every non-match, from MemTotal through HugePages_Rsvd, until the requested key is reached]
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
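With anon=0 and surp=0 collected, the remaining call below gathers HugePages_Rsvd before verify_nr_hugepages checks the configured pool. The same counters can be eyeballed directly on a host with a plain grep (illustrative only, not part of the test suite):

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages)' /proc/meminfo
    # The snapshots above report: HugePages_Total 1024, HugePages_Free 1024,
    # HugePages_Rsvd 0, HugePages_Surp 0 -- i.e. the full 2 GiB pool from
    # default_setup is allocated and no pages are reserved or surplus.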
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43182176 kB' 'MemAvailable: 46983572 kB' 'Buffers: 12164 kB' 'Cached: 11927184 kB' 'SwapCached: 0 kB' 'Active: 8897148 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504544 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580056 kB' 'Mapped: 174660 kB' 'Shmem: 7927716 kB' 'KReclaimable: 437484 kB' 'Slab: 868512 kB' 'SReclaimable: 437484 kB' 'SUnreclaim: 431028 kB' 'KernelStack: 19328 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9934948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208120 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.143 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 continues past every non-match from MemTotal through HugePages_Total; the log excerpt is truncated here, before the HugePages_Rsvd match is reached]
-- # read -r var val _ 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.145 nr_hugepages=1024 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.145 resv_hugepages=0 00:03:16.145 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.145 surplus_hugepages=0 00:03:16.406 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.406 anon_hugepages=0 00:03:16.406 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.406 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43179508 kB' 'MemAvailable: 46980904 kB' 'Buffers: 12164 kB' 'Cached: 11927200 kB' 'SwapCached: 0 kB' 'Active: 8897212 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504608 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579548 kB' 'Mapped: 174656 kB' 'Shmem: 7927732 kB' 'KReclaimable: 437484 kB' 'Slab: 868512 kB' 'SReclaimable: 437484 kB' 
'SUnreclaim: 431028 kB' 'KernelStack: 19392 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9923888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208152 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:16.407 08:16:28 setup.sh.hugepages.default_setup -- 
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
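The two calls traced above are the same helper at work: get_meminfo walks a meminfo file with IFS=': ', skips every field that is not the requested one, and echoes the matching value (0 for HugePages_Rsvd, 1024 for HugePages_Total). A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh, with an illustrative function name; the real helper additionally switches to /sys/devices/system/node/nodeN/meminfo and strips the "Node N " prefix when a node id is passed:

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # MemTotal, SwapFree, ... all skipped
            echo "$val"                        # value column only, unit dropped
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Total   # prints 1024 on the host traced above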
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.408 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.409 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.409 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.409 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:16.409 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:16.409 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25497324 kB' 'MemUsed: 7138328 kB' 'SwapCached: 0 kB' 'Active: 3413072 kB' 'Inactive: 146984 kB' 'Active(anon): 3125776 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196520 kB' 'Mapped: 127596 kB' 'AnonPages: 366168 kB' 'Shmem: 2762240 kB' 'KernelStack: 11128 kB' 'PageTables: 5476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 143872 kB' 'Slab: 389140 kB' 'SReclaimable: 143872 kB' 'SUnreclaim: 245268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:16.409 [setup/common.sh@32 trace elided: every node0 field from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] and the loop continues]
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:16.410 node0=1024 expecting 1024
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:16.410
00:03:16.410 real	0m7.452s
00:03:16.410 user	0m2.052s
00:03:16.410 sys	0m3.325s
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:16.410 08:16:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:16.410 ************************************
00:03:16.410 END TEST default_setup
00:03:16.410 ************************************
00:03:16.410 08:16:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:16.410 08:16:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:16.410 08:16:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:16.410 08:16:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:16.410 08:16:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:16.410 ************************************
00:03:16.410 START TEST per_node_1G_alloc
00:03:16.410 ************************************
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
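The sizing step just traced reduces to a single division. Assuming the size argument is in kB, which the per_node_1G_alloc test name and the resulting 512 pages both suggest, 1048576 kB split into 2048 kB hugepages gives the nr_hugepages=512 seen above. A sketch of that arithmetic, with illustrative variable names:

    default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in /proc/meminfo
    size_kb=1048576          # the first argument to get_test_nr_hugepages (1 GiB)
    if (( size_kb >= default_hugepages )); then
        nr_hugepages=$(( size_kb / default_hugepages ))
    fi
    echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=512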
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.410 08:16:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:19.699 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.699 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.699 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:03:19.699 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.699 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
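get_test_nr_hugepages_per_node then assigns the same 512-page count to each node named in HUGENODE, and setup.sh is invoked with NRHUGE=512 and HUGENODE=0,1 to apply it, which is why the test goes on to expect 1024 pages in total. A rough sketch of that split (illustrative only; on a real run scripts/setup.sh writes the counts through the kernel's per-node hugepages sysfs knobs):

    user_nodes=(0 1)           # parsed from HUGENODE=0,1
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=512   # NRHUGE pages on every requested node
    done
    total=0
    for node in "${!nodes_test[@]}"; do
        (( total += nodes_test[node] ))
    done
    echo "expecting $total hugepages"   # -> expecting 1024 hugepages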
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43154844 kB' 'MemAvailable: 46956496 kB' 'Buffers: 12164 kB' 'Cached: 11927344 kB' 'SwapCached: 0 kB' 'Active: 8893836 kB' 'Inactive: 3619028 kB' 'Active(anon): 8501232 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576212 kB' 'Mapped: 173684 kB' 'Shmem: 7927876 kB' 'KReclaimable: 437740 kB' 'Slab: 868472 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430732 kB' 'KernelStack: 19072 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9903336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208008 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
00:03:21.121 [setup/common.sh@32 trace continues: each field is tested against [[ $var == AnonHugePages ]] and the loop continues until it matches]
-- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.121 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
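The trace above is setup/common.sh's get_meminfo helper walking every key in /proc/meminfo until it reaches AnonHugePages, echoing its value (0) and returning; the same loop is about to run again for HugePages_Surp and HugePages_Rsvd. A minimal sketch of that loop, reconstructed from the setup/common.sh@17-33 lines in the trace — the function and variable names match the trace, but the mapfile plumbing is simplified and the missing-key fallback is an assumption, so treat this as an illustration rather than the literal SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob    # the trace's "Node +([0-9]) " prefix strip needs extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo
        # a node argument switches to that node's meminfo, whose lines carry
        # a "Node N " prefix (the trace probes this path with node empty)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }         # no-op for plain /proc/meminfo
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done <"$mem_f"
        echo 0    # key absent: report zero (assumption; the trace only shows hits)
    }

Called as get_meminfo AnonHugePages for the global counters, or with a node number appended for the per-node files; verify_nr_hugepages uses the results to populate anon, surp and resv, all of which come back 0 in this run.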
00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43156720 kB' 'MemAvailable: 46958372 kB' 'Buffers: 12164 kB' 'Cached: 11927344 kB' 'SwapCached: 0 kB' 'Active: 8893096 kB' 'Inactive: 3619028 kB' 'Active(anon): 8500492 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575912 kB' 'Mapped: 173576 kB' 'Shmem: 7927876 kB' 'KReclaimable: 437740 kB' 'Slab: 868448 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430708 kB' 'KernelStack: 19072 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9903352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207976 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.122 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.123 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43157040 kB' 'MemAvailable: 46958692 kB' 'Buffers: 12164 kB' 'Cached: 11927364 kB' 'SwapCached: 0 kB' 'Active: 8893164 kB' 'Inactive: 3619028 kB' 'Active(anon): 8500560 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575920 kB' 'Mapped: 173576 kB' 'Shmem: 7927896 kB' 'KReclaimable: 437740 kB' 'Slab: 868448 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430708 kB' 'KernelStack: 19072 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9903376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207976 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.124 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.125 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.125 08:16:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[trace collapsed: the IFS=': ' read loop at setup/common.sh@31-32 steps past every remaining meminfo key from Writeback through HugePages_Free, issuing "continue" for each; only HugePages_Rsvd matches]
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
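The scan above is the whole of get_meminfo's inner loop: split each meminfo line on ': ', compare the key against the requested field, and echo the value of the first match. A minimal sketch of that lookup, reconstructed only from the commands visible in this trace (not the verbatim SPDK helper; the real setup/common.sh uses mapfile into an array and strips the "Node N " prefix with mem=("${mem[@]#Node +([0-9]) }"), for which the sed below is a stand-in):

#!/usr/bin/env bash
# Sketch of the lookup traced above; names follow the log, not a spec.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node queries read the node-local file when it exists,
    # as the trace shows for node0 and node1 further down.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of "continue" above
        echo "$val"                        # e.g. 0 for HugePages_Rsvd here
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # drop per-node prefix
    return 1
}

get_meminfo HugePages_Rsvd   # prints 0 in this run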
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.126 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43157464 kB' 'MemAvailable: 46959116 kB' 'Buffers: 12164 kB' 'Cached: 11927404 kB' 'SwapCached: 0 kB' 'Active: 8892772 kB' 'Inactive: 3619028 kB' 'Active(anon): 8500168 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575480 kB' 'Mapped: 173576 kB' 'Shmem: 7927936 kB' 'KReclaimable: 437740 kB' 'Slab: 868448 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430708 kB' 'KernelStack: 19040 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9903400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 207976 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[trace collapsed: the read loop at setup/common.sh@31-32 steps past every key from MemTotal through Unaccepted; only HugePages_Total matches]
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
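At this point the test holds all three counters it needs: nr_hugepages=1024 from the request, resv=0 and surp=0 from the earlier lookups, and it has just confirmed HugePages_Total == nr_hugepages + surp + resv before get_nodes records an expected 512 pages per NUMA node. A short sketch of that bookkeeping, using the values and variable names read straight from this trace (the standard node[0-9]* glob below stands in for the extglob pattern node+([0-9]) the script uses):

#!/usr/bin/env bash
# Consistency check mirrored from the trace; all values are from this run.
nr_hugepages=1024 resv=0 surp=0
total=1024   # what get_meminfo HugePages_Total returned above
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
# get_nodes: expect an even split of the 1024 pages across the NUMA nodes.
declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=512   # 512 x 2 MiB pages per node
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine
(( no_nodes > 0 ))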
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.128 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26526256 kB' 'MemUsed: 6109396 kB' 'SwapCached: 0 kB' 'Active: 3411224 kB' 'Inactive: 146984 kB' 'Active(anon): 3123928 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196604 kB' 'Mapped: 127168 kB' 'AnonPages: 364700 kB' 'Shmem: 2762324 kB' 'KernelStack: 10936 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144160 kB' 'Slab: 389032 kB' 'SReclaimable: 144160 kB' 'SUnreclaim: 244872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace collapsed: the read loop at setup/common.sh@31-32 steps past every node0 key from MemTotal through HugePages_Free; only HugePages_Surp matches]
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.129 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659344 kB' 'MemFree: 16631712 kB' 'MemUsed: 11027632 kB' 'SwapCached: 0 kB' 'Active: 5482376 kB' 'Inactive: 3472044 kB' 'Active(anon): 5377068 kB' 'Inactive(anon): 0 kB' 'Active(file): 105308 kB' 'Inactive(file): 3472044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8742988 kB' 'Mapped: 46408 kB' 'AnonPages: 211560 kB' 'Shmem: 5165636 kB' 'KernelStack: 8152 kB' 'PageTables: 3228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293580 kB' 'Slab: 479416 kB' 'SReclaimable: 293580 kB' 'SUnreclaim: 185836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
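The two per-node passes differ from the global one only in where they read: node-local meminfo lives under /sys/devices/system/node/nodeN/meminfo, and each of its lines carries a "Node N " prefix, which is why the trace strips it with mem=("${mem[@]#Node +([0-9]) }"). Reproducing the per-node surplus check with the get_meminfo sketch from earlier (both node dumps above report HugePages_Surp: 0, so both lookups should print 0):

for n in 0 1; do
    echo "node$n HugePages_Surp: $(get_meminfo HugePages_Surp "$n")"
done
# node0 HugePages_Surp: 0
# node1 HugePages_Surp: 0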
-- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 
08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
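The runs of \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the records above are bash xtrace behavior, not log corruption: when the right-hand side of a [[ ... == ... ]] test comes from a quoted expansion, xtrace re-escapes every character to show it is matched as a literal string rather than as a glob pattern. A minimal reproduction, assuming plain bash (this is not the SPDK script itself):

    set -x
    get=HugePages_Surp
    [[ MemFree == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]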
00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.130 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
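The long run of IFS=': ' / read -r var val _ / continue records is a single field-scan loop in setup/common.sh, traced once per /proc/meminfo line until the requested field matches. A hedged sketch of its shape, inferred from the @28-@33 tags in the trace (the real SPDK helper may differ in detail):

    # get_meminfo <field> [node] -- shape inferred from the trace, not verbatim source
    shopt -s extglob
    get_meminfo() {
        local get=$1 mem line var val _
        mapfile -t mem < /proc/meminfo             # @28: buffer the file
        mem=("${mem[@]#Node +([0-9]) }")           # @29: strip "Node N " prefixes (per-node files)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # @31
            [[ $var == "$get" ]] || continue         # @32: every rejected field logs one "continue"
            echo "$val"                              # @33: here "echo 0" for HugePages_Surp
            return 0
        done
    }

Each continue record above is one rejected meminfo field; the scan ends just below, where HugePages_Surp finally matches and a surplus count of 0 is returned to the caller.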
00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:21.131 node0=512 expecting 512 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:21.131 node1=512 expecting 512 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:21.131 00:03:21.131 real 0m4.808s 00:03:21.131 user 0m1.877s 00:03:21.131 sys 0m3.000s 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.131 08:16:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:21.131 ************************************ 00:03:21.131 END TEST per_node_1G_alloc 00:03:21.131 ************************************ 00:03:21.131 08:16:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.131 08:16:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:21.131 08:16:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.131 08:16:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.131 08:16:33 setup.sh.hugepages -- 
common/autotest_common.sh@10 -- # set +x 00:03:21.463 ************************************ 00:03:21.463 START TEST even_2G_alloc 00:03:21.463 ************************************ 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.463 08:16:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:24.789 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.789 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.789 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:24.789 0000:00:04.6 (8086 2021): Already 
using the vfio-pci driver 00:03:24.789 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.789 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43172940 kB' 'MemAvailable: 46974592 kB' 'Buffers: 12164 kB' 'Cached: 11927524 kB' 'SwapCached: 0 kB' 'Active: 8896400 kB' 'Inactive: 3619028 kB' 'Active(anon): 8503796 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578520 kB' 'Mapped: 173752 kB' 'Shmem: 7928056 kB' 'KReclaimable: 437740 kB' 'Slab: 869248 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431508 kB' 'KernelStack: 19200 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9906784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208152 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.174 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.175 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 
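The echo 0 / return 0 just above closes the AnonHugePages query, and the records that follow show the verification pass storing it (anon=0, hugepages.sh@97) and then repeating the same scan for HugePages_Surp and HugePages_Rsvd. Earlier in this test (hugepages.sh@57-@84 above), 1024 x 2048 kB pages were requested with an even 512/512 split across the two NUMA nodes, and these three counters are what the expected totals are checked against. A hedged sketch of the bookkeeping, inferred from the @94-@100 tags (variable names are assumptions, not verbatim source):

    # verify_nr_hugepages bookkeeping -- inferred shape
    local anon=0 surp resv
    # @96: count transparent hugepages only if THP is not pinned to [never]
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)     # @97: anon=0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)        # @99: surp=0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # @100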
00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43169860 kB' 'MemAvailable: 46971512 kB' 'Buffers: 12164 kB' 'Cached: 11927528 kB' 'SwapCached: 0 kB' 'Active: 8894288 kB' 'Inactive: 3619028 kB' 'Active(anon): 8501684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576796 kB' 'Mapped: 173672 kB' 'Shmem: 7928060 kB' 'KReclaimable: 437740 kB' 'Slab: 869160 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431420 kB' 'KernelStack: 19056 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9906800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208104 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.176 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.177 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
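mem_f=/proc/meminfo above is only the default: the @23 probe that follows checks for a per-node file, and because this query passes no node argument (local node= at @18), the probed path is literally /sys/devices/system/node/node/meminfo, which never exists, so the system-wide file is kept. A per-node query, as in the per_node_1G_alloc test above, would supply 0 or 1 instead. A hedged sketch of the fallback, inferred from the @18-@25 tags:

    # source-file selection inside get_meminfo -- inferred shape
    local node=${2:-}              # @18: empty for system-wide queries
    local mem_f=/proc/meminfo      # @22: default source
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then    # @23
        mem_f=/sys/devices/system/node/node$node/meminfo            # per-node stats
    fi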
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43169144 kB' 'MemAvailable: 46970796 kB' 'Buffers: 12164 kB' 'Cached: 11927544 kB' 'SwapCached: 0 kB' 'Active: 8894752 kB' 'Inactive: 3619028 kB' 'Active(anon): 8502148 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577260 kB' 'Mapped: 173672 kB' 'Shmem: 7928076 kB' 'KReclaimable: 437740 kB' 'Slab: 869000 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431260 kB' 'KernelStack: 19264 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9906576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208232 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
00:03:26.178 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # scan: every key from MemTotal through HugePages_Free is not HugePages_Rsvd; continue past each
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:26.180 nr_hugepages=1024
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.180 resv_hugepages=0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.180 surplus_hugepages=0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.180 anon_hugepages=0
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
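
The two (( ... )) checks above are the point of this pass: system-wide, HugePages_Total must equal the requested page count plus surplus plus reserved. A sketch of that bookkeeping with the values just printed; the variable names loosely mirror setup/hugepages.sh and are assumptions.

    #!/usr/bin/env bash
    # Accounting check sketch, using the values echoed in the trace above.
    nr_hugepages=1024   # pages requested for the even_2G_alloc test
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo

    # Every allocated 2 MiB page must be requested, surplus, or reserved:
    # 1024 pages * 2048 kB = 2097152 kB, matching the 'Hugetlb' line above.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: ${total} pages"
    else
        echo "mismatch: total=${total} vs ${nr_hugepages}+${surp}+${resv}" >&2
        exit 1
    fi
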
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43165808 kB' 'MemAvailable: 46967460 kB' 'Buffers: 12164 kB' 'Cached: 11927564 kB' 'SwapCached: 0 kB' 'Active: 8894736 kB' 'Inactive: 3619028 kB' 'Active(anon): 8502132 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577204 kB' 'Mapped: 173672 kB' 'Shmem: 7928096 kB' 'KReclaimable: 437740 kB' 'Slab: 869000 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431260 kB' 'KernelStack: 19216 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9904236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208136 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.180 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # scan: every key from MemTotal through Unaccepted is not HugePages_Total; continue past each
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26539844 kB' 'MemUsed: 6095808 kB' 'SwapCached: 0 kB' 'Active: 3413144 kB' 'Inactive: 146984 kB' 'Active(anon): 3125848 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196616 kB' 'Mapped: 127232 kB' 'AnonPages: 366676 kB' 'Shmem: 2762336 kB' 'KernelStack: 11000 kB' 'PageTables: 4700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144160 kB' 'Slab: 389336 kB' 'SReclaimable: 144160 kB' 'SUnreclaim: 245176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
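
get_nodes and the per-node get_meminfo calls rely on two small bash tricks: extglob expansion of node+([0-9]) to enumerate NUMA nodes, and stripping the "Node N " prefix that per-node meminfo lines carry before the usual key scan can be reused. A self-contained sketch of both follows, under the assumption of the 512-pages-per-node split shown in the nodes_sys assignments above; array and variable names are illustrative.

    #!/usr/bin/env bash
    shopt -s extglob    # required for the +([0-9]) patterns below

    # Enumerate NUMA nodes and split the 1024 pages evenly, as get_nodes does.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512    # e.g. nodes_sys[0]=512 nodes_sys[1]=512
    done
    echo "no_nodes=${#nodes_sys[@]}"

    # Per-node meminfo lines read "Node 0 HugePages_Surp: 0", so the
    # "Node N " prefix is stripped before scanning for the key.
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "node${node} HugePages_Surp: $val"
    done
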
00:03:26.182 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # scan: every key from MemTotal through HugePages_Free is not HugePages_Surp; continue past each
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659344 kB' 'MemFree: 16625800 kB' 'MemUsed: 11033544 kB' 'SwapCached: 0 kB' 'Active: 5481648 kB' 'Inactive: 3472044 kB' 'Active(anon): 5376340 kB' 'Inactive(anon): 0 kB' 'Active(file): 105308 kB' 'Inactive(file): 3472044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8743156 kB' 
'Mapped: 46440 kB' 'AnonPages: 210620 kB' 'Shmem: 5165804 kB' 'KernelStack: 8088 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293580 kB' 'Slab: 479664 kB' 'SReclaimable: 293580 kB' 'SUnreclaim: 186084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.183 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- 
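
The printf entry above is the raw node1 meminfo snapshot (note 'HugePages_Total: 512' and 'HugePages_Free: 512') that get_meminfo then walks key by key. For orientation, here is a minimal Bash re-creation of that scan, pieced together from the setup/common.sh line references in the trace; treat it as a hedged sketch, not the verbatim SPDK helper:

    shopt -s extglob    # the +([0-9]) pattern below needs extended globs

    # Hedged reconstruction of the loop traced at setup/common.sh@16-33.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-NUMA-node view when a node index was passed in.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp 1 on this box it would print 0, which is exactly the echo 0 / return 0 pair the trace shows once the matching key is reached.
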
00:03:26.184 08:16:38 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: node1 meminfo keys, MemTotal through HugePages_Free, each tested against HugePages_Surp and skipped via continue]
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:26.185 node0=512 expecting 512
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:26.185 node1=512 expecting 512
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:26.185
00:03:26.185 real 0m4.862s
00:03:26.185 user 0m1.824s
00:03:26.185 sys 0m3.101s
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.185 08:16:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.185 ************************************
00:03:26.185 END TEST even_2G_alloc
00:03:26.185 ************************************
00:03:26.185 08:16:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.185 08:16:38 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:26.185 08:16:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.185 08:16:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
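
That closes even_2G_alloc. Per the hugepages.sh@115-@130 trace above, the pass condition is plain bookkeeping: take the expected per-node figure, add the reserved and surplus pages reported by the kernel, and compare against what was actually allocated. A hedged sketch of that check, reusing the get_meminfo sketch from earlier (nodes_sys and resv stand in for values the script gathered before this excerpt):

    # Sketch only: names follow the xtrace, surrounding plumbing is assumed.
    declare -a nodes_test=(512 512)   # expected hugepages per NUMA node
    declare -a nodes_sys=(512 512)    # actual per-node allocation (gathered earlier)
    resv=0                            # reserved-pages adjustment (0 in this run)

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # += 0 here
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # the @130-style test
    done

On this run both nodes print "=512 expecting 512", matching the two echo lines in the trace.
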
00:03:26.185 08:16:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.185 ************************************
00:03:26.185 START TEST odd_alloc
00:03:26.185 ************************************
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.185 08:16:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:29.475 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.475 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.475 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:03:29.475 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:29.475 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:03:30.852 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:30.852 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:30.852 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.852 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.852 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43169664 kB' 'MemAvailable: 46971316 kB' 'Buffers: 12164 kB' 'Cached: 11927716 kB' 'SwapCached: 0 kB' 'Active: 8897180 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504576 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579656 kB' 'Mapped: 173732 kB' 'Shmem: 7928248 kB' 'KReclaimable: 437740 kB' 'Slab: 868772 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431032 kB' 'KernelStack: 19056 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486500 kB' 'Committed_AS: 9905008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208040 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
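
One detail worth spelling out from the odd_alloc prologue above: HUGEMEM=2049 MB works out to size=2098176 kB, and 2098176 / 2048 rounds up to nr_hugepages=1025, a deliberately odd count that cannot split evenly across two nodes, so the @81-@84 loop hands node1 512 pages and node0 the remaining 513 (the dump above accordingly shows 'HugePages_Total: 1025'). A sketch of that split; the loop shape follows the trace, while the ceiling division is an assumption about how nr_hugepages is derived:

    size_kb=2098176        # HUGEMEM=2049 MB expressed in kB
    hugepagesize_kb=2048   # from 'Hugepagesize: 2048 kB' above
    # Assumed ceiling division: (2098176 + 2047) / 2048 = 1025
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))

    _nr=$nr_hugepages
    _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr / _no_nodes ))  # 1025/2=512, then 513/1=513
        _nr=$(( _nr - nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=513 node1=512
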
00:03:30.853 08:16:43 setup.sh.hugepages.odd_alloc -- [xtrace condensed: /proc/meminfo keys, MemTotal through HardwareCorrupted, each tested against AnonHugePages and skipped via continue]
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.854 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43169908 kB' 'MemAvailable: 46971560 kB' 'Buffers: 12164 kB' 'Cached: 11927720 kB' 'SwapCached: 0 kB' 'Active: 8896972 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504368 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579464 kB' 'Mapped: 173704 kB' 'Shmem: 7928252 kB' 'KReclaimable: 437740 kB' 'Slab: 868788 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431048 kB' 'KernelStack: 19088 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486500 kB' 'Committed_AS: 9905024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208008 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
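
Note that the two get_meminfo calls in this verify pass run with an empty node argument: /sys/devices/system/node/node/meminfo does not exist and [[ -n '' ]] fails, so mem_f stays /proc/meminfo and the system-wide totals ('HugePages_Total: 1025', 'AnonHugePages: 0 kB') are what get scanned. In terms of the sketch given earlier, the three call shapes seen in this excerpt would be:

    get_meminfo AnonHugePages      # system-wide /proc/meminfo -> 0
    get_meminfo HugePages_Surp     # system-wide /proc/meminfo -> 0
    get_meminfo HugePages_Surp 1   # per-node node1/meminfo    -> 0
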
00:03:30.855 08:16:43 setup.sh.hugepages.odd_alloc -- [xtrace condensed: /proc/meminfo keys, MemTotal through HardwareCorrupted, each tested against HugePages_Surp and skipped via continue]
00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43170912 kB' 'MemAvailable: 46972564 kB' 'Buffers: 12164 kB' 'Cached: 11927736 kB' 'SwapCached: 0 kB' 'Active: 8896988 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504384 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579456 kB' 'Mapped: 173704 kB' 'Shmem: 7928268 kB' 'KReclaimable: 437740 kB' 'Slab: 868788 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431048 kB' 'KernelStack: 19088 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486500 kB' 'Committed_AS: 9905044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208008 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[setup/common.sh@31-32 xtrace condensed: the read/compare/continue cycle repeats for every key in the snapshot above; none matches \H\u\g\e\P\a\g\e\s\_\R\s\v\d until HugePages_Rsvd itself]
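The condensed scans above all come from the same get_meminfo helper in setup/common.sh, whose loop is visible in the trace: split each meminfo line on ': ', skip non-matching keys, echo the value of the first match. A minimal runnable sketch of that pattern, assuming plain /proc/meminfo and omitting the per-node prefix handling; get_meminfo_sketch is a hypothetical name, a simplified re-implementation for illustration and not the exact SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above (hypothetical helper,
    # not the exact setup/common.sh code): split each /proc/meminfo line
    # on ': ', skip keys that do not match, echo the value of the match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-match -> next key
            echo "$val"                        # bare value, e.g. "0" or "1025"
            return 0
        done < /proc/meminfo
        return 1                               # key absent
    }

    surp=$(get_meminfo_sketch HugePages_Surp)  # "0" in the run above

Because IFS contains both ':' and ' ', runs of either are collapsed, so the numeric value always lands in val regardless of column padding, and the trailing 'kB' unit falls into the throwaway _ field.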
00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:31.122 nr_hugepages=1025 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.122 resv_hugepages=0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.122 surplus_hugepages=0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.122 anon_hugepages=0 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
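The two arithmetic guards just traced at setup/hugepages.sh@107 and @109 are the accounting check of the odd_alloc test, which allocates an odd page count (1025 in this run) and requires the kernel-reported total to equal the requested count plus surplus plus reserved pages. Restated compactly with this run's values, variable names mirroring the trace:

    # Accounting identity checked above, with this run's values:
    # HugePages_Total (1025) == nr_hugepages + HugePages_Surp + HugePages_Rsvd
    nr_hugepages=1025  # odd count requested by the odd_alloc test
    surp=0             # get_meminfo HugePages_Surp
    resv=0             # get_meminfo HugePages_Rsvd
    (( 1025 == nr_hugepages + surp + resv )) || exit 1
    (( 1025 == nr_hugepages )) || exit 1

With surp and resv both zero, the test then re-reads HugePages_Total below to confirm the kernel actually committed all 1025 pages.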
00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43172612 kB' 'MemAvailable: 46974264 kB' 'Buffers: 12164 kB' 'Cached: 11927756 kB' 'SwapCached: 0 kB' 'Active: 8896996 kB' 'Inactive: 3619028 kB' 'Active(anon): 8504392 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619028 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579460 kB' 'Mapped: 173704 kB' 'Shmem: 7928288 kB' 'KReclaimable: 437740 kB' 'Slab: 868788 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 431048 kB' 'KernelStack: 19088 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486500 kB' 'Committed_AS: 9905068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208008 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:31.122 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 xtrace condensed: the read/compare/continue cycle repeats for every key in the snapshot above; none matches \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l until HugePages_Total itself]
00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26534476 kB' 'MemUsed: 6101176 kB' 'SwapCached: 0 kB' 'Active: 3413888 kB' 'Inactive: 146984 kB' 'Active(anon): 3126592 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196660 kB' 'Mapped: 127292 kB' 'AnonPages: 367424 kB' 'Shmem: 2762380 kB' 'KernelStack: 11000 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144096 kB' 'Slab: 388912 kB' 'SReclaimable: 144096 kB' 'SUnreclaim: 244816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
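Here get_meminfo is called with a node argument, so the trace takes the per-node branch: common.sh@24 switches mem_f to the node0 sysfs file and @29 strips the "Node 0 " prefix from every line before the usual key scan. A runnable sketch of that branch, assuming the same extglob setting the trace's +([0-9]) pattern implies; get_node_meminfo_sketch is a hypothetical name, a simplified re-implementation rather than the exact setup/common.sh helper:

    #!/usr/bin/env bash
    # Sketch of the per-node branch traced above (hypothetical helper):
    # with a node argument, read /sys/devices/system/node/node$N/meminfo
    # and strip the "Node N " prefix from each line before the key scan.
    shopt -s extglob                        # needed for the +([0-9]) pattern
    get_node_meminfo_sketch() {
        local get=$1 node=$2 var val _ mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_node_meminfo_sketch HugePages_Surp 0   # "0" per the node0 snapshot above

Stripping the prefix lets the same key-scan loop serve both the global /proc/meminfo and the per-node sysfs files, which is why the trace shows identical @31/@32 lines in both cases.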
127292 kB' 'AnonPages: 367424 kB' 'Shmem: 2762380 kB' 'KernelStack: 11000 kB' 'PageTables: 4964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144096 kB' 'Slab: 388912 kB' 'SReclaimable: 144096 kB' 'SUnreclaim: 244816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.124 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 
08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659344 kB' 'MemFree: 16638388 kB' 'MemUsed: 11020956 kB' 'SwapCached: 0 kB' 'Active: 5483136 kB' 'Inactive: 3472044 kB' 'Active(anon): 5377828 kB' 'Inactive(anon): 0 kB' 'Active(file): 105308 kB' 'Inactive(file): 3472044 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8743300 kB' 'Mapped: 46412 kB' 'AnonPages: 212032 kB' 'Shmem: 5165948 kB' 'KernelStack: 8088 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293644 kB' 'Slab: 479876 kB' 'SReclaimable: 293644 kB' 'SUnreclaim: 186232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 
08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:31.128 node0=512 expecting 513 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:31.128 node1=513 expecting 512 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:31.128 00:03:31.128 real 0m4.902s 00:03:31.128 user 0m1.880s 00:03:31.128 sys 0m3.098s 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.128 08:16:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.128 ************************************ 00:03:31.128 END TEST odd_alloc 00:03:31.128 ************************************ 00:03:31.128 08:16:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:31.128 08:16:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:31.128 08:16:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.128 08:16:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.128 08:16:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.128 ************************************ 00:03:31.128 START TEST custom_alloc 00:03:31.128 ************************************ 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 
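get_test_nr_hugepages above turned the 1048576 kB (1 GiB) request into nr_hugepages=512, consistent with the 2048 kB Hugepagesize reported in this run's meminfo snapshots, and get_test_nr_hugepages_per_node just below splits a total evenly across the two NUMA nodes when no per-node counts are given. A minimal standalone sketch of that arithmetic (hypothetical helper names, assuming the 2 MiB default page size; not the repo's setup/hugepages.sh functions):

  # Sketch of the sizing arithmetic (hypothetical names, not setup/hugepages.sh itself).
  default_hugepage_kb=2048                      # Hugepagesize reported in this run

  pages_for() {                                 # kB requested -> hugepage count
      local size_kb=$1
      echo $(( size_kb / default_hugepage_kb ))
  }

  split_across_nodes() {                        # total pages, node count -> even share
      local pages=$1 nodes=$2
      echo $(( pages / nodes ))
  }

  pages_for 1048576        # -> 512, matching nr_hugepages=512 above
  pages_for 2097152        # -> 1024, matching the second request below
  split_across_nodes 512 2 # -> 256 per node before custom overrides

As the trace below shows, custom_alloc then replaces the even split with nodes_hp[0]=512 and nodes_hp[1]=1024 and hands HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' to scripts/setup.sh.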
00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:31.128 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for 
_no_nodes in "${!nodes_hp[@]}" 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.129 08:16:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:34.417 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.417 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.417 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:de:00.0 (8086 0953): Already using the vfio-pci driver 00:03:34.417 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.417 
0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.417 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.797 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 42133080 kB' 'MemAvailable: 45934756 kB' 'Buffers: 12164 kB' 'Cached: 11927924 kB' 'SwapCached: 0 kB' 'Active: 8898084 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505480 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579856 kB' 'Mapped: 173876 kB' 'Shmem: 7928432 kB' 'KReclaimable: 437740 kB' 'Slab: 868356 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430616 kB' 'KernelStack: 19104 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963236 kB' 'Committed_AS: 9905708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[... xtrace elided: every system meminfo key from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits setup/common.sh@32 continue ...]
00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
get=HugePages_Surp 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.798 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 42133424 kB' 'MemAvailable: 45935100 kB' 'Buffers: 12164 kB' 'Cached: 11927928 kB' 'SwapCached: 0 kB' 'Active: 8897816 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505212 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580076 kB' 'Mapped: 173780 kB' 'Shmem: 7928436 kB' 'KReclaimable: 437740 kB' 'Slab: 868288 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430548 kB' 'KernelStack: 19104 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963236 kB' 'Committed_AS: 9905724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208056 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.799 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.800 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 42133744 kB' 'MemAvailable: 45935420 kB' 'Buffers: 12164 kB' 'Cached: 11927944 kB' 'SwapCached: 0 kB' 'Active: 8897836 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505232 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580076 kB' 'Mapped: 173780 kB' 'Shmem: 7928452 kB' 'KReclaimable: 437740 kB' 'Slab: 868288 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430548 kB' 'KernelStack: 19104 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963236 kB' 'Committed_AS: 9905748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208056 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.801 08:16:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.801 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.802 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:36.065 nr_hugepages=1536 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.065 resv_hugepages=0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.065 surplus_hugepages=0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.065 anon_hugepages=0 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 42135016 kB' 'MemAvailable: 45936692 kB' 'Buffers: 12164 kB' 'Cached: 11927944 kB' 'SwapCached: 0 kB' 'Active: 8897804 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505200 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 580044 kB' 'Mapped: 173780 kB' 'Shmem: 7928452 kB' 'KReclaimable: 437740 kB' 'Slab: 868288 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430548 kB' 'KernelStack: 19088 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963236 kB' 'Committed_AS: 9905768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208056 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.065 08:16:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.065 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.065-00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc [xtrace condensed: setup/common.sh@32 compared each remaining meminfo field (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Total and issued `continue` for each non-match]
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
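The loop condensed above is the setup/common.sh get_meminfo pattern: split each meminfo line on ': ' and keep reading until the requested key matches, then echo its value. A minimal standalone sketch of that technique (the function name get_meminfo_field is illustrative, not the exact setup/common.sh source):

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo the way the xtrace above does.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip every line until the key matches, then print its value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo_field HugePages_Total   # prints e.g. 1536 on this test node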
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.067 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 26553120 kB' 'MemUsed: 6082532 kB' 'SwapCached: 0 kB' 'Active: 3413320 kB' 'Inactive: 146984 kB' 'Active(anon): 3126024 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196692 kB' 'Mapped: 127360 kB' 'AnonPages: 366720 kB' 'Shmem: 2762412 kB' 'KernelStack: 10968 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144096 kB' 'Slab: 388468 kB' 'SReclaimable: 144096 kB' 'SUnreclaim: 244372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:36.067-00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc [xtrace condensed: setup/common.sh@32 walked the node0 fields (MemTotal through HugePages_Free), issuing `continue` for each non-match]
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
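One detail worth noting before the node1 pass: per-node meminfo lines in sysfs carry a "Node <id> " prefix (e.g. "Node 0 MemTotal: ..."), which is why the trace strips it with ${mem[@]#Node +([0-9]) } before the same key/value scan can run. A small illustrative sketch of that step (node_meminfo is a hypothetical helper name; requires bash extglob):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch: read /sys/devices/system/node/node<N>/meminfo and drop the
    # leading "Node <N> " so lines look like plain /proc/meminfo lines.
    node_meminfo() {
        local node=$1 mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }
    node_meminfo 0 | grep HugePages_Surp   # e.g. "HugePages_Surp: 0"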
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.068 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27659344 kB' 'MemFree: 15583652 kB' 'MemUsed: 12075692 kB' 'SwapCached: 0 kB' 'Active: 5484544 kB' 'Inactive: 3472068 kB' 'Active(anon): 5379236 kB' 'Inactive(anon): 0 kB' 'Active(file): 105308 kB' 'Inactive(file): 3472068 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8743480 kB' 'Mapped: 46420 kB' 'AnonPages: 213332 kB' 'Shmem: 5166104 kB' 'KernelStack: 8120 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293644 kB' 'Slab: 479820 kB' 'SReclaimable: 293644 kB' 'SUnreclaim: 186176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:36.068-00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc [xtrace condensed: setup/common.sh@32 walked the node1 fields (MemTotal through HugePages_Free), issuing `continue` until HugePages_Surp matched]
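Both per-node lookups above were driven by get_nodes, which discovers NUMA node ids by globbing /sys/devices/system/node/node<N> with extglob. A reduced sketch of that enumeration (the nodes_sys contents here, read from the 2048 kB hugepage counter, are an assumption about what the test records per node):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Sketch: discover NUMA node ids by globbing sysfs, as get_nodes does.
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last "node", leaving the id.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "found ${#nodes_sys[@]} nodes: ${!nodes_sys[*]}"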
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:36.070
00:03:36.070 real 0m4.855s
00:03:36.070 user 0m1.910s
00:03:36.070 sys 0m3.014s
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:36.070 08:16:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:36.070 ************************************
00:03:36.070 END TEST custom_alloc
00:03:36.070 ************************************
00:03:36.070 08:16:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:36.070 08:16:48 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:36.070 08:16:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:36.070 08:16:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:36.070 08:16:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:36.070 ************************************
00:03:36.070 START TEST no_shrink_alloc
00:03:36.070 ************************************
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
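The sizing step just traced converts a requested allocation into a hugepage count and then pins that count to the user-supplied node list; on this run 2097152 / 2048 = 1024 pages, all assigned to node 0. A hedged sketch of that calculation (variable names mirror the trace; the kB units for size and the 2 MB default page are assumptions consistent with the numbers above, not the exact setup/hugepages.sh source):

    #!/usr/bin/env bash
    # Sketch: derive nr_hugepages from a size request and assign it per node,
    # mirroring get_test_nr_hugepages / get_test_nr_hugepages_per_node above.
    size_kb=2097152            # requested size (assumption: kB)
    default_hugepages=2048     # one 2 MB hugepage, in kB
    user_nodes=(0)             # node ids passed by the caller
    nr_hugepages=$((size_kb / default_hugepages))   # 2097152 / 2048 = 1024
    nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages   # every requested node gets the full count
    done
    echo "node0 gets ${nodes_test[0]} hugepages"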
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.070 08:16:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:39.361 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:39.361 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:39.361 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:03:39.361 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:39.361 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
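The setup.sh pass is effectively a no-op here because every NVMe and ioat device is already bound to vfio-pci. For anyone reproducing this, a device's bound driver can be confirmed straight from sysfs; a quick sketch (the device address is taken from the log above, any of the listed ones would do):

    #!/usr/bin/env bash
    # Sketch: report which kernel driver a PCI device is currently bound to.
    dev=0000:dd:00.0
    if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
        echo "$dev -> $(basename "$(readlink -f "/sys/bus/pci/devices/$dev/driver")")"
    else
        echo "$dev is not bound to any driver"
    fi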
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:40.742 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
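The test at setup/hugepages.sh@96 matches the kernel's transparent-hugepage mode string, in which the active mode is the bracketed token; since the token here is [madvise] rather than [never], THP is in play and the anonymous hugepage count must be sampled. A standalone sketch of the same gate:

    #!/usr/bin/env bash
    # Sketch: decide whether transparent hugepages are in play by checking
    # which mode is bracketed in the sysfs control file.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        echo "THP active ($thp); anonymous hugepages may affect the count"
    fi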
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.743 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- 
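
The @96 guard above checks that transparent hugepages are not disabled ("[never]" absent from /sys/kernel/mm/transparent_hugepage/enabled) before sampling AnonHugePages. The scan that was just condensed is the whole of get_meminfo: cache the file once, strip any per-node prefix, then re-split every cached line with IFS=': ' until the requested key turns up. A self-contained sketch of that routine, reconstructed from the setup/common.sh@17-33 trace (the real SPDK helper may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob                          # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}              # key to fetch, optional NUMA node
        local var val line
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-node sysfs copy instead (common.sh@23-25);
        # node= is empty in the trace above, so /proc/meminfo is used.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"             # slurp the file once (common.sh@28)
        mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " prefixes (common.sh@29)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # split "Key: value kB" (common.sh@31)
            [[ $var == "$get" ]] || continue        # skip keys until the match (common.sh@32)
            echo "$val"                             # bare value, e.g. 0 or 1024 (common.sh@33)
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages    # prints 0 on the box traced above

The helper emits the number only, so a caller like anon=$(get_meminfo AnonHugePages) gets 0 here because AnonHugePages reads 0 kB in the dump.
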
00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.744 08:16:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43160956 kB' 'MemAvailable: 46962632 kB' 'Buffers: 12164 kB' 'Cached: 11928104 kB' 'SwapCached: 0 kB' 'Active: 8897432 kB' 'Inactive: 3619052 kB' 'Active(anon): 8504828 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579580 kB' 'Mapped: 173828 kB' 'Shmem: 7928612 kB' 'KReclaimable: 437740 kB' 'Slab: 868140 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430400 kB' 'KernelStack: 19104 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9907664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[... repetitive xtrace condensed: setup/common.sh@31-32 'continue's past every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches ...]
00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
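
Before the next lookup, it is worth pinning down what these two keys mean: HugePages_Surp counts surplus pages the kernel allocated beyond the static pool (overcommit), and HugePages_Rsvd counts pages promised to a mapping but not yet faulted in. Both read 0 in the dumps above, as expected for a freshly provisioned static pool. A plain-awk one-liner (not part of the SPDK scripts) to eyeball all four pool counters at once:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ { print $1, $2 }' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # HugePages_Rsvd: 0
    # HugePages_Surp: 0
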
00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.746 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161508 kB' 'MemAvailable: 46963184 kB' 'Buffers: 12164 kB' 'Cached: 11928124 kB' 'SwapCached: 0 kB' 'Active: 8896932 kB' 'Inactive: 3619052 kB' 'Active(anon): 8504328 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578920 kB' 'Mapped: 173856 kB' 'Shmem: 7928632 kB' 'KReclaimable: 437740 kB' 'Slab: 868140 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430400 kB' 'KernelStack: 19200 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9909176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[... repetitive xtrace condensed: setup/common.sh@31-32 'continue's past every key from MemTotal through HugePages_Free until HugePages_Rsvd matches ...]
00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.748 nr_hugepages=1024 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.748 resv_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.748 surplus_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.748 anon_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.748 nr_hugepages=1024 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.748 resv_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.748 surplus_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.748 anon_hugepages=0 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161880 kB' 'MemAvailable: 46963556 kB' 'Buffers: 12164 kB' 'Cached: 11928124 kB' 'SwapCached: 0 kB' 'Active: 8897416 kB' 'Inactive: 3619052 
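[editor's note] The get_meminfo helper traced above is a small /proc/meminfo parser: it slurps the file (or a per-NUMA-node copy) into an array, strips any "Node N " prefix, then splits each line on ': ' and prints the value whose key matches. A minimal self-contained sketch of that pattern, simplified from what the trace shows rather than copied verbatim from the test's setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the xtrace above (simplified).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # Per-node counters live in sysfs and carry a "Node N " line prefix.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " where present
        while IFS=': ' read -r var val _; do  # val is the number, _ swallows "kB"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Total     # prints 1024 on this box
    get_meminfo HugePages_Surp 0    # surplus pages on NUMA node 0

The surrounding hugepages.sh checks then reduce to simple arithmetic: the configured nr_hugepages (1024) must equal the kernel-reported total with surplus and reserved pages folded in, 1024 == 1024 + 0 + 0 here, and the dump is self-consistent, since Hugetlb = HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB.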
00:03:40.748 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: every key from MemTotal through Unaccepted is compared against HugePages_Total; no match, each iteration hits continue]
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- [xtrace condensed: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' read loop begins]
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25494920 kB' 'MemUsed: 7140732 kB' 'SwapCached: 0 kB' 'Active: 3412604 kB' 'Inactive: 146984 kB' 'Active(anon): 3125308 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196712 kB' 'Mapped: 127416 kB' 'AnonPages: 365984 kB' 'Shmem: 2762432 kB' 'KernelStack: 11048 kB' 'PageTables: 5304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144096 kB' 'Slab: 388364 kB' 'SReclaimable: 144096 kB' 'SUnreclaim: 244268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
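[editor's note] get_nodes, traced at hugepages.sh@27-33 above, discovers NUMA nodes with an extglob over sysfs and keys an array by node number. A sketch of that enumeration; the trace only shows the assignments nodes_sys[0]=1024 and nodes_sys[1]=0, so reading each node's count from its per-node nr_hugepages file is an assumption made to keep the example standalone:

    #!/usr/bin/env bash
    # Sketch of the get_nodes pattern: one array slot per NUMA node.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # "/sys/devices/system/node/node1" -> array index 1
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "nodes: ${!nodes_sys[*]} -> hugepages: ${nodes_sys[*]}"   # here: 0 1 -> 1024 0

The ${node##*node} expansion strips everything up to and including the last "node", so the sysfs directory name itself supplies the array index.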
00:03:40.750 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: every node0 meminfo key from MemTotal through HugePages_Free is compared against HugePages_Surp; no match, each iteration hits continue]
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:41.009 node0=1024 expecting 1024
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.009 08:16:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
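[editor's note] The hugepages.sh@115-@130 lines above fold reserved and surplus pages into a per-node expectation and compare it with what sysfs reported. A condensed reconstruction with the values observed in this run; how nodes_test is first seeded, and the exact operands behind the final [[ 1024 == \1\0\2\4 ]] test, fall outside the traced window, so those parts are assumptions:

    #!/usr/bin/env bash
    # Reconstruction of the per-node bookkeeping traced at hugepages.sh@115-130.
    nodes_sys=([0]=1024 [1]=0)   # actual per-node counts (from get_nodes)
    nodes_test=([0]=1024)        # expected counts (assumed seeded earlier)
    resv=0 surp=0                # both reported as 0 by get_meminfo above
    declare -A sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # dedupe expected counts
        sorted_s[${nodes_sys[node]}]=1    # dedupe actual counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # passes here: both reduce to "1024"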
00:03:44.298 0000:dd:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:44.298 0000:df:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:44.298 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:de:00.0 (8086 0953): Already using the vfio-pci driver
00:03:44.298 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:44.298 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:44.299 0000:dc:00.0 (8086 0953): Already using the vfio-pci driver
00:03:45.681 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- [xtrace condensed: local node sorted_t sorted_s surp resv anon]
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- [xtrace condensed: get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ' read loop begins]
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161504 kB' 'MemAvailable: 46963180 kB' 'Buffers: 12164 kB' 'Cached: 11928260 kB' 'SwapCached: 0 kB' 'Active: 8897804 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505200 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579120 kB' 'Mapped: 173956 kB' 'Shmem: 7928768 kB' 'KReclaimable: 437740 kB' 'Slab: 868568 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430828 kB' 'KernelStack: 19056 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9906964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208088 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
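[editor's note] Two things worth unpacking here. First, the INFO line: scripts/setup.sh was asked for NRHUGE=512 but found 1024 pages already allocated on node0 and, with CLEAR_HUGE=no, evidently leaves the larger existing allocation in place rather than shrinking it, which is exactly the behavior this no_shrink_alloc test exercises. Second, the test at hugepages.sh@96 works by pattern-matching the bracketed active mode in the transparent-hugepage sysfs file; a minimal sketch of that check (the sysfs path is the standard kernel location, assumed rather than shown in the trace):

    #!/usr/bin/env bash
    # The kernel marks the active THP mode with brackets,
    # e.g. "always [madvise] never"; the traced test only asks
    # whether the active mode is [never].
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is active (always or madvise), so anonymous huge pages can
        # exist and AnonHugePages from /proc/meminfo is worth reading.
        echo "THP enabled: $thp"
    fi

On this machine the file reads "always [madvise] never", the pattern does not match [never], and the script proceeds to get_meminfo AnonHugePages, as the next trace lines show.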
00:03:45.681 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: the read loop compares each /proc/meminfo key from MemTotal onward against AnonHugePages; every key up to Bounce has hit continue with no match when this excerpt ends, and the trace carries on past this point]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.682 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.683 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161420 kB' 'MemAvailable: 46963096 kB' 'Buffers: 12164 kB' 'Cached: 11928272 kB' 'SwapCached: 0 kB' 'Active: 8897532 kB' 'Inactive: 3619052 kB' 'Active(anon): 8504928 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579380 kB' 'Mapped: 173872 kB' 'Shmem: 7928780 kB' 'KReclaimable: 437740 kB' 'Slab: 868568 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430828 kB' 'KernelStack: 19088 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9907352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[xtrace condensed: the same @31/@32 scan skips every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
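The cycle just traced is the entire get_meminfo helper: slurp /proc/meminfo (or a per-node copy under /sys/devices/system/node), strip any leading "Node N " prefix, then read "Key: value" pairs until the requested key matches and echo its value. A minimal runnable sketch of that logic, reconstructed from the xtrace rather than copied from setup/common.sh (the while/printf framing and the explicit node guard are assumptions):

#!/usr/bin/env bash
# Sketch of the get_meminfo helper traced at setup/common.sh@16-33 above.
# Reconstructed from the xtrace; not the verbatim SPDK source.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a NUMA node is requested.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it (extglob pattern).
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [unit]" pairs and print the value of the requested key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the node traced above

With node= left empty, the probe in the trace degenerates to /sys/devices/system/node/node/meminfo, which never exists, so every lookup here falls back to the system-wide /proc/meminfo.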
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.684 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.685 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.685 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.685 08:16:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161760 kB' 'MemAvailable: 46963436 kB' 'Buffers: 12164 kB' 'Cached: 11928288 kB' 'SwapCached: 0 kB' 'Active: 8897416 kB' 'Inactive: 3619052 kB' 'Active(anon): 8504812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579236 kB' 'Mapped: 173872 kB' 'Shmem: 7928796 kB' 'KReclaimable: 437740 kB' 'Slab: 868568 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430828 kB' 'KernelStack: 19072 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9907372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[xtrace condensed: the same @31/@32 scan skips every key from MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
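The three lookups are now complete: hugepages.sh recorded anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), and the events that follow check that the preallocated pool survived the allocation test. A hypothetical recap of that accounting, reusing the get_meminfo sketch above (variable names mirror the trace; the 1024 target matches HugePages_Total in the snapshots):

# Accounting recap under the traced values; get_meminfo as sketched earlier.
anon=$(get_meminfo AnonHugePages)            # 0 kB of anonymous hugepages in use
surp=$(get_meminfo HugePages_Surp)           # 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)           # 0 reserved pages
nr_hugepages=$(get_meminfo HugePages_Total)  # 1024, the preallocated pool
(( 1024 == nr_hugepages ))                   # the check traced at hugepages.sh@109 below
echo "nr_hugepages=$nr_hugepages surp=$surp resv=$resv anon=$anon"

With surp and resv both zero, an intact 1024-page pool means no pages leaked or stayed reserved across the no_shrink_alloc run.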
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.687 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60294996 kB' 'MemFree: 43161508 kB' 'MemAvailable: 46963184 kB' 'Buffers: 12164 kB' 'Cached: 11928292 kB' 'SwapCached: 0 kB' 'Active: 8897780 kB' 'Inactive: 3619052 kB' 'Active(anon): 8505176 kB' 'Inactive(anon): 0 kB' 'Active(file): 392604 kB' 'Inactive(file): 3619052 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579632 kB' 'Mapped: 173872 kB' 'Shmem: 7928800 kB' 'KReclaimable: 437740 kB' 'Slab: 868568 kB' 'SReclaimable: 437740 kB' 'SUnreclaim: 430828 kB' 'KernelStack: 19088 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487524 kB' 'Committed_AS: 9907396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 208072 kB' 'VmallocChunk: 0 kB' 'Percpu: 72576 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 459732 kB' 'DirectMap2M: 8656896 kB' 'DirectMap1G: 59768832 kB'
[... xtrace then steps past every meminfo field from MemTotal through HugePages_Free, continuing on each key that is not HugePages_Total ...]
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.688 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
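What the HugePages_Total lookup above amounts to: get_meminfo reads the whole meminfo file into an array, then scans it field by field with read -r var val, continuing past every key that is not the one requested and echoing the value when it matches; the get_meminfo HugePages_Surp 0 call that follows repeats the same scan against node0's file. A plain-bash sketch of the scan under those assumptions (meminfo_get is an illustrative name, not the script's):

  # illustrative reimplementation of the scan, not the SPDK helper itself
  meminfo_get() {
    local var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$1" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  meminfo_get HugePages_Total   # prints 1024 on the machine traced here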
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.689 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32635652 kB' 'MemFree: 25522064 kB' 'MemUsed: 7113588 kB' 'SwapCached: 0 kB' 'Active: 3412924 kB' 'Inactive: 146984 kB' 'Active(anon): 3125628 kB' 'Inactive(anon): 0 kB' 'Active(file): 287296 kB' 'Inactive(file): 146984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3196736 kB' 'Mapped: 127460 kB' 'AnonPages: 366320 kB' 'Shmem: 2762456 kB' 'KernelStack: 10984 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 144096 kB' 'Slab: 388412 kB' 'SReclaimable: 144096 kB' 'SUnreclaim: 244316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace continues past every node0 meminfo field that is not HugePages_Surp ...]
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
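The second lookup is the same scan pointed at node0's own file: the per-node copy lives under /sys/devices/system/node/node0/meminfo and each of its lines carries a "Node 0 " prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the key comparison. A sketch of the per-node variant (node_meminfo_get is an illustrative name; the real script reuses get_meminfo with the node passed as the second argument):

  # illustrative per-node lookup; falls back to the global file on
  # systems without the per-node sysfs entry
  node_meminfo_get() {
    local key=$1 node=$2 f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      f=/sys/devices/system/node/node$node/meminfo
    sed 's/^Node [0-9]* *//' "$f" |
      awk -v k="$key" -F': *' '$1 == k { print $2 + 0; exit }'
  }
  node_meminfo_get HugePages_Surp 0   # prints 0 for node0, as traced above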
00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.690 node0=1024 expecting 1024 00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.690 00:03:45.690 real 0m9.593s 00:03:45.690 user 0m3.648s 00:03:45.690 sys 0m6.083s 00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.690 08:16:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.690 ************************************ 00:03:45.690 END TEST no_shrink_alloc 00:03:45.690 ************************************ 00:03:45.690 08:16:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.690 08:16:58 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.690 00:03:45.690 real 0m36.997s 00:03:45.690 user 0m13.409s 00:03:45.690 sys 0m21.966s 00:03:45.690 08:16:58 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.690 08:16:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.690 ************************************ 00:03:45.690 END TEST hugepages 00:03:45.690 ************************************ 00:03:45.690 08:16:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:45.690 08:16:58 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:45.690 08:16:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.690 08:16:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.690 08:16:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.690 ************************************ 00:03:45.690 START TEST driver 00:03:45.690 ************************************ 00:03:45.690 08:16:58 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:45.949 * Looking for test storage... 00:03:45.949 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:45.949 08:16:58 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:45.949 08:16:58 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.949 08:16:58 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.159 08:17:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:58.159 08:17:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.159 08:17:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.159 08:17:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.159 ************************************ 00:03:58.159 START TEST guess_driver 00:03:58.159 ************************************ 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 198 > 0 )) 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:58.159 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:58.159 08:17:09 
setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:58.159 Looking for driver=vfio-pci
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.159 08:17:09 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:00.691 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:00.691 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:00.691 08:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the @57/@58/@61 marker check repeats identically for each device line printed by setup.sh config, timestamps 00:04:00.691 through 00:04:03.110 ...]
00:04:03.110 08:17:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:03.110 08:17:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:03.110 08:17:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker
setup_driver 00:04:04.498 08:17:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:04.498 08:17:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:04.498 08:17:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.498 08:17:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.712 00:04:16.712 real 0m18.815s 00:04:16.712 user 0m3.610s 00:04:16.712 sys 0m6.936s 00:04:16.712 08:17:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.712 08:17:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.712 ************************************ 00:04:16.712 END TEST guess_driver 00:04:16.712 ************************************ 00:04:16.712 08:17:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:16.712 00:04:16.712 real 0m30.369s 00:04:16.712 user 0m5.555s 00:04:16.712 sys 0m10.476s 00:04:16.712 08:17:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.712 08:17:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:16.712 ************************************ 00:04:16.712 END TEST driver 00:04:16.712 ************************************ 00:04:16.712 08:17:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:16.712 08:17:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:16.712 08:17:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.712 08:17:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.712 08:17:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:16.712 ************************************ 00:04:16.712 START TEST devices 00:04:16.712 ************************************ 00:04:16.712 08:17:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:16.712 * Looking for test storage... 
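The guess_driver pass that just finished encodes a simple policy: check whether unsafe no-IOMMU mode is enabled, count the IOMMU groups the kernel exposes, and accept vfio-pci only if the whole module chain resolves through modprobe; with 198 IOMMU groups present on this host, vfio-pci wins. A sketch of that decision under those assumptions (the uio_pci_generic fallback is an assumption about the fuller script, not shown in this trace):

  # illustrative decision logic; paths are the standard kernel sysfs layout
  shopt -s nullglob
  unsafe=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  groups=(/sys/kernel/iommu_groups/*)
  if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
     modprobe --show-depends vfio_pci | grep -q '\.ko'; then
    echo vfio-pci
  else
    echo uio_pci_generic   # assumed fallback; the real script has more cases
  fi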
00:04:16.712 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup
00:04:16.712 08:17:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:16.712 08:17:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:16.712 08:17:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:16.712 08:17:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:04:22.013 08:17:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
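get_zoned_devs, traced above, filters out zoned namespaces because they cannot back the generic mount tests that follow. A condensed sketch of the probe (function and variable names follow the xtrace; the real helper also records each device's PCI address, omitted here):

  is_block_zoned() {
      local device=$1
      [[ -e /sys/block/$device/queue/zoned ]] || return 1
      # Ordinary (non-ZNS) namespaces report "none", which is what every
      # [[ none != none ]] line above is testing.
      [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }

  declare -gA zoned_devs=()
  for nvme in /sys/block/nvme*; do
      is_block_zoned "${nvme##*/}" && zoned_devs["${nvme##*/}"]=1
  done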
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dd:00.0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\d\:\0\0\.\0* ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:22.013 No valid GPT data, bailing
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dd:00.0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:df:00.0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\f\:\0\0\.\0* ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1
00:04:22.013 No valid GPT data, bailing
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:df:00.0
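Each "No valid GPT data, bailing" above is the desired outcome: block_in_use only flags a disk when an existing partition table is found on it. A sketch of that decision as the trace shows it, with $rootdir standing in for the SPDK checkout (an assumed name):

  block_in_use() {
      local block=$1 pt
      # scripts/common.sh@387: spdk-gpt.py prints "No valid GPT data, bailing"
      # when it finds no GPT it recognizes; that alone does not mark the disk busy.
      "$rootdir/scripts/spdk-gpt.py" "$block"
      # scripts/common.sh@391: blkid is the fallback; any PTTYPE at all means
      # something else owns the disk.
      pt=$(blkid -s PTTYPE -o value "/dev/$block")
      [[ -n $pt ]]   # empty pt -> return 1, i.e. the disk is free for testing
  }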
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:de:00.0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\e\:\0\0\.\0* ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1
00:04:22.013 No valid GPT data, bailing
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.013 08:17:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]]
00:04:22.013 08:17:34 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size ))
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:de:00.0
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1
00:04:22.013 08:17:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:dc:00.0
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\c\:\0\0\.\0* ]]
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1
00:04:22.014 08:17:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt
00:04:22.014 08:17:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme3n1
00:04:22.014 No valid GPT data, bailing
00:04:22.014 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:04:22.014 08:17:34 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.014 08:17:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1
00:04:22.014 08:17:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1
00:04:22.014 08:17:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]]
00:04:22.014 08:17:34 setup.sh.devices -- setup/common.sh@80 -- # echo 400088457216
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 400088457216 >= min_disk_size ))
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:dc:00.0
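The echoed byte counts feed the min_disk_size gate: 2000398934016 bytes for each of the two 2 TB namespaces, 400088457216 bytes for each of the two 400 GB ones, and both clear the 3221225472-byte (3 GiB) floor, so all four devices join blocks. The conversion is plain sector arithmetic, sketched here:

  sec_size_to_bytes() {
      local dev=$1
      [[ -e /sys/block/$dev ]] || return 1
      # /sys/block/<dev>/size counts 512-byte sectors regardless of the
      # namespace's formatted LBA size.
      echo $(( $(< "/sys/block/$dev/size") * 512 ))
  }

  min_disk_size=3221225472   # 3 GiB
  (( $(sec_size_to_bytes nvme0n1) >= min_disk_size ))   # true: 2000398934016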
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 ))
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:22.014 08:17:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:22.014 08:17:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:22.014 08:17:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.014 08:17:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:22.014 ************************************
00:04:22.014 START TEST nvme_mount
00:04:22.014 ************************************
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:22.014 08:17:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:22.951 Creating new GPT entries in memory.
00:04:22.951 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:22.951 other utilities.
00:04:22.951 08:17:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:22.951 08:17:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:22.951 08:17:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:22.951 08:17:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:22.951 08:17:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:24.329 Creating new GPT entries in memory.
00:04:24.329 The operation has completed successfully.
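The --new=1:2048:2099199 argument above is derived rather than hard-coded: the requested 1 GiB is converted to 512-byte sectors and laid down from the first usable LBA. Reassembled from the @41/@51/@58-@60 lines in the trace:

  size=1073741824                                           # 1 GiB requested
  (( size /= 512 ))                                         # -> 2097152 sectors
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))  # -> 2048
  (( part_end = part_start + size - 1 ))                    # -> 2099199
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:${part_start}:${part_end}

The flock matters: the uevent watcher launched at common.sh@53 runs concurrently with the partitioning, so sgdisk is serialized against other holders of the device's advisory lock.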
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 570798
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.329 08:17:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
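The @62 comparison that just set found=1 is a bash pattern match, not string equality: the right-hand side (rendered by xtrace with every character backslash-escaped) is a glob whose two * wildcards bracket the expected mount mapping. Reduced to its shape:

  status_line='Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev'
  mounts=nvme0n1:nvme0n1p1
  if [[ $status_line == *"Active devices: "*"$mounts"* ]]; then
      found=1   # the allowed controller is visibly backing the test mount
  fi

The remaining nineteen PCI functions are then read and rejected one by one against the single allowed address, 0000:dd:00.0.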
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.620 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:27.621 08:17:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:28.999 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:28.999 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:29.258 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:29.258 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:04:29.258 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:29.258 /dev/nvme0n1: calling ioctl to re-read partition table: Success
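The wipefs output above is worth decoding once: "53 ef" at offset 0x00000438 is the ext4 superblock magic (0xEF53 stored little-endian, 0x38 bytes into the superblock that starts at byte 1024), "45 46 49 20 50 41 52 54" is ASCII "EFI PART", the GPT header signature that exists both at LBA 1 and in the backup header near the end of the 2 TB disk (hence offset 0x1d1c1115e00), and "55 aa" at 0x000001fe is the protective MBR's boot signature. The helper producing it is essentially (condensed from the @20-@28 lines):

  cleanup_nvme() {
      mountpoint -q "$nvme_mount" && umount "$nvme_mount"
      # Drop the filesystem signature on the partition first...
      [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
      # ...then both GPT copies and the protective MBR on the whole disk.
      [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
  }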
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:dd:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.258 08:17:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:32.551 08:17:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # 
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:dd:00.0 data@nvme0n1 '' ''
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.454 08:17:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.740 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:37.741 08:17:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:39.120 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:39.120
00:04:39.120 real 0m17.066s
00:04:39.120 user 0m5.511s
00:04:39.120 sys 0m9.253s
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:39.120 08:17:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:39.120 ************************************
00:04:39.120 END TEST nvme_mount
00:04:39.120 ************************************
00:04:39.120 08:17:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0
00:04:39.120 08:17:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:39.120 08:17:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:39.120 08:17:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.120 08:17:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:39.120 ************************************
00:04:39.120 START TEST dm_mount
00:04:39.120 ************************************
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
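Compared with the single-partition nvme_mount run, partition_drive is now invoked with part_no=2, so the @46/@47 loop above expands to two partition names before the disk is touched, and --zap-all clears whatever the previous test left behind. Condensed:

  disk=nvme0n1 part_no=2 size=1073741824
  parts=()
  for (( part = 1; part <= part_no; part++ )); do
      parts+=("${disk}p$part")    # -> nvme0n1p1 nvme0n1p2
  done
  (( size /= 512 ))               # 1 GiB -> 2097152 sectors apiece
  sgdisk "/dev/$disk" --zap-all   # wipe old GPT/MBR before re-creating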
00:04:39.120 08:17:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:40.056 Creating new GPT entries in memory.
00:04:40.056 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:40.056 other utilities.
00:04:40.056 08:17:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:40.056 08:17:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:40.056 08:17:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:40.056 08:17:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:40.056 08:17:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:41.434 Creating new GPT entries in memory.
00:04:41.434 The operation has completed successfully.
00:04:41.434 08:17:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:41.434 08:17:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:41.434 08:17:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:41.434 08:17:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:41.434 08:17:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:42.369 The operation has completed successfully.
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 576328
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
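dmsetup create nvme_dm_test takes its table on stdin, and the table itself is never echoed into this log; over two freshly created 1 GiB partitions the obvious shape is two concatenated linear segments, shown below with illustrative numbers only (the real table comes from setup/devices.sh). The readlink and holders checks in the trace then confirm the node exists and that dm-2 has claimed both partitions:

  printf '%s\n' \
      '0 2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' |
      dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test       # -> /dev/dm-2 on this host
  ls /sys/class/block/nvme0n1p{1,2}/holders  # dm-2 appears once the map is live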
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size=
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:dd:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.369 08:17:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:45.656 08:17:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:dd:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' ''
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:dd:00.0
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:dd:00.0
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.559 08:17:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:04:50.096 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dd:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.096 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]]
00:04:50.097 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:50.097 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.097 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:df:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.097 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:de:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.356 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
00:04:50.357 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.357 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]]
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.357 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:dc:00.0 == \0\0\0\0\:\d\d\:\0\0\.\0 ]] 00:04:50.357 08:18:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:52.287 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:52.287 00:04:52.287 real 0m12.935s 00:04:52.287 user 0m3.689s 00:04:52.287 sys 0m6.205s 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.287 08:18:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.287 ************************************ 00:04:52.287 END TEST dm_mount 00:04:52.287 ************************************ 00:04:52.287 08:18:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.287 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.287 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.287 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.287 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.287 08:18:04 setup.sh.devices -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.287 08:18:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:52.287 00:04:52.287 real 0m36.197s 00:04:52.287 user 0m11.428s 00:04:52.287 sys 0m19.032s 00:04:52.287 08:18:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.287 08:18:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.287 ************************************ 00:04:52.287 END TEST devices 00:04:52.287 ************************************ 00:04:52.548 08:18:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.548 00:04:52.548 real 2m24.049s 00:04:52.548 user 0m41.518s 00:04:52.548 sys 1m11.070s 00:04:52.548 08:18:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.548 08:18:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.548 ************************************ 00:04:52.548 END TEST setup.sh 00:04:52.548 ************************************ 00:04:52.548 08:18:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.548 08:18:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:04:55.842 Hugepages
00:04:55.842 node hugesize free / total
00:04:55.842 node0 1048576kB 0 / 0
00:04:55.842 node0 2048kB 2048 / 2048
00:04:55.842 node1 1048576kB 0 / 0
00:04:55.842 node1 2048kB 0 / 0
00:04:55.842
00:04:55.842 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:55.842 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:04:55.842 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:04:56.102 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:04:56.102 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:04:56.102 NVMe 0000:dc:00.0 8086 0953 1 nvme nvme3 nvme3n1
00:04:56.102 NVMe 0000:dd:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:56.362 NVMe 0000:de:00.0 8086 0953 1 nvme nvme2 nvme2n1
00:04:56.362 NVMe 0000:df:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:04:56.362 08:18:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:56.362 08:18:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:56.362 08:18:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:56.362 08:18:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:59.707 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.707 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.996
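The Hugepages columns above are printed by setup.sh status; the same per-node numbers come straight from sysfs, so a rough equivalent (standard kernel paths, not the SPDK implementation) is:
# Hedged sketch: rebuild the "node hugesize free / total" rows from sysfs.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    sz=${hp##*hugepages-}                      # e.g. 2048kB
    printf '%s %s %s / %s\n' "${node##*/}" "$sz" \
      "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
  done
done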
0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:59.996 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:01.927 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.927 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:01.927 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:02.185 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:03.562 08:18:15 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:04.499 08:18:16 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:04.499 08:18:16 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:04.499 08:18:16 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.499 08:18:16 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:04.499 08:18:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:04.499 08:18:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:04.499 08:18:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.499 08:18:16 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:04.499 08:18:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:04.499 08:18:16 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:04.499 08:18:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:04.499 08:18:16 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.790 Waiting for block devices as requested 00:05:07.790 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme 00:05:08.049 0000:df:00.0 (8086 0a54): vfio-pci -> nvme 00:05:08.049 0000:de:00.0 (8086 0953): vfio-pci -> nvme 00:05:11.337 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:11.337 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:11.596 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:11.596 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:11.596 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:11.596 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:11.854 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:11.854 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:11.854 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:12.113 0000:dc:00.0 (8086 0953): vfio-pci -> nvme 00:05:16.297 08:18:28 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dc:00.0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # grep 0000:dc:00.0/nvme/nvme 
00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:00.0/0000:dc:00.0/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:16.297 08:18:28 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:dd:00.0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # grep 0000:dd:00.0/nvme/nvme 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:01.0/0000:dd:00.0/nvme/nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:16.297 08:18:28 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:16.297 08:18:28 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:16.297 08:18:28 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:16.297 08:18:28 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1557 -- # continue 00:05:16.297 08:18:28 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:de:00.0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # grep 0000:de:00.0/nvme/nvme 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:02.0/0000:de:00.0/nvme/nvme2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:16.297 08:18:28 -- common/autotest_common.sh@1545 -- # oacs=' 0x6' 00:05:16.297 08:18:28 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1548 -- # [[ 0 -ne 0 ]] 00:05:16.297 08:18:28 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:16.297 08:18:28 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:df:00.0 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # grep 0000:df:00.0/nvme/nvme 00:05:16.297 08:18:28 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:16.297 08:18:28 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 ]] 00:05:16.298 08:18:28 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:02.0/0000:da:00.0/0000:db:03.0/0000:df:00.0/nvme/nvme1 00:05:16.298 08:18:28 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:16.298 08:18:28 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:16.298 08:18:28 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:16.298 08:18:28 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:16.298 08:18:28 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:16.298 08:18:28 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:16.298 08:18:28 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:16.298 08:18:28 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:16.298 08:18:28 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:16.298 08:18:28 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:16.298 08:18:28 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:16.298 08:18:28 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:16.298 08:18:28 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:16.298 08:18:28 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:16.298 08:18:28 -- common/autotest_common.sh@1557 -- # continue 00:05:16.298 08:18:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:16.298 08:18:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.298 08:18:28 -- 
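Each pass of the loop above resolves a PCI address to its kernel controller node by matching the bdf inside the /sys/class/nvme symlink targets, then reads the OACS word from nvme id-ctrl; bit 3 (0x8) is namespace management, which is why 0xe yields oacs_ns_manage=8 and 0x6 yields 0. A condensed sketch of the same resolution (example bdf; assumes nvme-cli is installed):
# Hedged sketch of get_nvme_ctrlr_from_bdf plus the oacs check traced above.
bdf=0000:dd:00.0                                 # example PCI address
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$path")                   # e.g. /dev/nvme0
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
echo "$ctrlr ns_manage=$(( oacs & 0x8 ))"        # 8 if namespace mgmt is supported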
common/autotest_common.sh@10 -- # set +x 00:05:16.298 08:18:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:16.298 08:18:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.298 08:18:28 -- common/autotest_common.sh@10 -- # set +x 00:05:16.298 08:18:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:19.584 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.584 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.584 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.584 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:19.843 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:21.748 0000:dd:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.748 0000:df:00.0 (8086 0a54): nvme -> vfio-pci 00:05:21.748 0000:de:00.0 (8086 0953): nvme -> vfio-pci 00:05:22.006 0000:dc:00.0 (8086 0953): nvme -> vfio-pci 00:05:23.382 08:18:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:23.382 08:18:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.382 08:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.382 08:18:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:23.382 08:18:35 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:23.382 08:18:35 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.382 08:18:35 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:23.382 08:18:35 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:23.382 08:18:35 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:23.382 08:18:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.382 08:18:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.382 08:18:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.382 08:18:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.382 08:18:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.642 08:18:35 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0 00:05:23.642 08:18:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dc:00.0/device 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:23.642 08:18:35 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:23.642 08:18:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:dd:00.0/device 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # device=0x0a54 
00:05:23.642 08:18:35 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:23.642 08:18:35 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:de:00.0/device 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # device=0x0953 00:05:23.642 08:18:35 -- common/autotest_common.sh@1581 -- # [[ 0x0953 == \0\x\0\a\5\4 ]] 00:05:23.642 08:18:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:df:00.0/device 00:05:23.642 08:18:35 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:23.642 08:18:35 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:23.642 08:18:35 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:23.642 08:18:35 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:dd:00.0 0000:df:00.0 00:05:23.642 08:18:35 -- common/autotest_common.sh@1592 -- # [[ -z 0000:dd:00.0 ]] 00:05:23.642 08:18:35 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=589738 00:05:23.642 08:18:35 -- common/autotest_common.sh@1598 -- # waitforlisten 589738 00:05:23.642 08:18:35 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.642 08:18:35 -- common/autotest_common.sh@829 -- # '[' -z 589738 ']' 00:05:23.642 08:18:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.642 08:18:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.642 08:18:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.642 08:18:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.642 08:18:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.642 [2024-07-23 08:18:35.983062] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
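get_nvme_bdfs_by_id keeps only the controllers whose PCI device id matches the requested 0x0a54, which is why the printf above ends with 0000:dd:00.0 and 0000:df:00.0. The filter reduces to one sysfs read per device; a sketch with the bdf list hard-coded for illustration:
# Hedged sketch of the device-id filter traced above.
want=0x0a54
for bdf in 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0; do
  dev=$(cat "/sys/bus/pci/devices/$bdf/device")
  if [[ $dev == "$want" ]]; then echo "$bdf"; fi
done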
00:05:23.642 [2024-07-23 08:18:35.983149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589738 ] 00:05:23.642 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.642 [2024-07-23 08:18:36.099464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.901 [2024-07-23 08:18:36.259235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.468 08:18:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.468 08:18:36 -- common/autotest_common.sh@862 -- # return 0 00:05:24.468 08:18:36 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:24.468 08:18:36 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:24.468 08:18:36 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:dd:00.0 00:05:27.754 nvme0n1 00:05:27.754 08:18:39 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:27.754 [2024-07-23 08:18:40.021060] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:27.754 request: 00:05:27.754 { 00:05:27.754 "nvme_ctrlr_name": "nvme0", 00:05:27.754 "password": "test", 00:05:27.754 "method": "bdev_nvme_opal_revert", 00:05:27.754 "req_id": 1 00:05:27.754 } 00:05:27.754 Got JSON-RPC error response 00:05:27.754 response: 00:05:27.754 { 00:05:27.754 "code": -32602, 00:05:27.754 "message": "Invalid parameters" 00:05:27.754 } 00:05:27.754 08:18:40 -- common/autotest_common.sh@1604 -- # true 00:05:27.754 08:18:40 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:27.754 08:18:40 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:27.754 08:18:40 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:df:00.0 00:05:31.042 nvme1n1 00:05:31.042 08:18:43 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:31.042 [2024-07-23 08:18:43.256868] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:05:31.042 request: 00:05:31.042 { 00:05:31.042 "nvme_ctrlr_name": "nvme1", 00:05:31.042 "password": "test", 00:05:31.042 "method": "bdev_nvme_opal_revert", 00:05:31.042 "req_id": 1 00:05:31.042 } 00:05:31.042 Got JSON-RPC error response 00:05:31.042 response: 00:05:31.042 { 00:05:31.042 "code": -32602, 00:05:31.042 "message": "Invalid parameters" 00:05:31.042 } 00:05:31.042 08:18:43 -- common/autotest_common.sh@1604 -- # true 00:05:31.042 08:18:43 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:31.042 08:18:43 -- common/autotest_common.sh@1608 -- # killprocess 589738 00:05:31.042 08:18:43 -- common/autotest_common.sh@948 -- # '[' -z 589738 ']' 00:05:31.042 08:18:43 -- common/autotest_common.sh@952 -- # kill -0 589738 00:05:31.042 08:18:43 -- common/autotest_common.sh@953 -- # uname 00:05:31.042 08:18:43 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.042 08:18:43 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 589738 00:05:31.042 08:18:43 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.042 08:18:43 -- 
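Both bdev_nvme_opal_revert calls above come back with JSON-RPC error -32602 because the attached controllers report no Opal support. The sequence is driven through SPDK's rpc.py over the target's UNIX socket; stripped down to the two calls seen in the trace:
# Hedged sketch of the RPC sequence above (rpc.py ships with SPDK).
RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:dd:00.0
$RPC bdev_nvme_opal_revert -b nvme0 -p test \
  || echo "revert rejected: controller does not support Opal"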
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.042 08:18:43 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 589738' 00:05:31.042 killing process with pid 589738 00:05:31.042 08:18:43 -- common/autotest_common.sh@967 -- # kill 589738 00:05:31.042 08:18:43 -- common/autotest_common.sh@972 -- # wait 589738 00:05:35.232 08:18:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:35.232 08:18:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:35.232 08:18:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.232 08:18:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:35.232 08:18:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:35.232 08:18:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.232 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.232 08:18:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:35.232 08:18:47 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:35.232 08:18:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.232 08:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.232 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.232 ************************************ 00:05:35.232 START TEST env 00:05:35.232 ************************************ 00:05:35.232 08:18:47 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:35.232 * Looking for test storage... 00:05:35.232 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:35.232 08:18:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.232 08:18:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.232 08:18:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.232 08:18:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.232 ************************************ 00:05:35.232 START TEST env_memory 00:05:35.232 ************************************ 00:05:35.232 08:18:47 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:35.232 00:05:35.232 00:05:35.232 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.232 http://cunit.sourceforge.net/ 00:05:35.232 00:05:35.232 00:05:35.232 Suite: memory 00:05:35.232 Test: alloc and free memory map ...[2024-07-23 08:18:47.610482] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.232 passed 00:05:35.232 Test: mem map translation ...[2024-07-23 08:18:47.643719] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.232 [2024-07-23 08:18:47.643746] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.233 [2024-07-23 08:18:47.643794] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.233 [2024-07-23 08:18:47.643809] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:35.233 passed 00:05:35.233 Test: mem map registration ...[2024-07-23 08:18:47.689831] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:35.233 [2024-07-23 08:18:47.689854] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:35.233 passed 00:05:35.492 Test: mem map adjacent registrations ...passed 00:05:35.492 00:05:35.492 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.492 suites 1 1 n/a 0 0 00:05:35.492 tests 4 4 4 0 0 00:05:35.492 asserts 152 152 152 0 n/a 00:05:35.492 00:05:35.492 Elapsed time = 0.175 seconds 00:05:35.492 00:05:35.492 real 0m0.202s 00:05:35.492 user 0m0.187s 00:05:35.492 sys 0m0.014s 00:05:35.492 08:18:47 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.492 08:18:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:35.492 ************************************ 00:05:35.492 END TEST env_memory 00:05:35.492 ************************************ 00:05:35.492 08:18:47 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.492 08:18:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.492 08:18:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.492 08:18:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.492 08:18:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.492 ************************************ 00:05:35.492 START TEST env_vtophys 00:05:35.492 ************************************ 00:05:35.492 08:18:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:35.492 EAL: lib.eal log level changed from notice to debug 00:05:35.492 EAL: Detected lcore 0 as core 0 on socket 0 00:05:35.492 EAL: Detected lcore 1 as core 1 on socket 0 00:05:35.492 EAL: Detected lcore 2 as core 2 on socket 0 00:05:35.492 EAL: Detected lcore 3 as core 3 on socket 0 00:05:35.492 EAL: Detected lcore 4 as core 4 on socket 0 00:05:35.492 EAL: Detected lcore 5 as core 5 on socket 0 00:05:35.492 EAL: Detected lcore 6 as core 8 on socket 0 00:05:35.492 EAL: Detected lcore 7 as core 9 on socket 0 00:05:35.492 EAL: Detected lcore 8 as core 10 on socket 0 00:05:35.492 EAL: Detected lcore 9 as core 11 on socket 0 00:05:35.492 EAL: Detected lcore 10 as core 12 on socket 0 00:05:35.492 EAL: Detected lcore 11 as core 16 on socket 0 00:05:35.492 EAL: Detected lcore 12 as core 17 on socket 0 00:05:35.492 EAL: Detected lcore 13 as core 18 on socket 0 00:05:35.492 EAL: Detected lcore 14 as core 19 on socket 0 00:05:35.492 EAL: Detected lcore 15 as core 20 on socket 0 00:05:35.492 EAL: Detected lcore 16 as core 21 on socket 0 00:05:35.492 EAL: Detected lcore 17 as core 24 on socket 0 00:05:35.492 EAL: Detected lcore 18 as core 25 on socket 0 00:05:35.492 EAL: Detected lcore 19 as core 26 on socket 0 00:05:35.492 EAL: Detected lcore 20 as core 27 on socket 0 00:05:35.492 EAL: Detected lcore 21 as core 28 on socket 0 00:05:35.492 EAL: Detected lcore 22 as core 0 on socket 1 00:05:35.492 EAL: Detected lcore 23 as core 1 on socket 1 00:05:35.492 EAL: Detected lcore 24 as core 2 on socket 1 00:05:35.492 EAL: Detected lcore 25 
as core 3 on socket 1 00:05:35.492 EAL: Detected lcore 26 as core 4 on socket 1 00:05:35.492 EAL: Detected lcore 27 as core 5 on socket 1 00:05:35.492 EAL: Detected lcore 28 as core 8 on socket 1 00:05:35.492 EAL: Detected lcore 29 as core 9 on socket 1 00:05:35.492 EAL: Detected lcore 30 as core 10 on socket 1 00:05:35.492 EAL: Detected lcore 31 as core 11 on socket 1 00:05:35.492 EAL: Detected lcore 32 as core 12 on socket 1 00:05:35.492 EAL: Detected lcore 33 as core 16 on socket 1 00:05:35.492 EAL: Detected lcore 34 as core 17 on socket 1 00:05:35.492 EAL: Detected lcore 35 as core 18 on socket 1 00:05:35.492 EAL: Detected lcore 36 as core 19 on socket 1 00:05:35.492 EAL: Detected lcore 37 as core 20 on socket 1 00:05:35.492 EAL: Detected lcore 38 as core 21 on socket 1 00:05:35.492 EAL: Detected lcore 39 as core 24 on socket 1 00:05:35.492 EAL: Detected lcore 40 as core 25 on socket 1 00:05:35.492 EAL: Detected lcore 41 as core 26 on socket 1 00:05:35.492 EAL: Detected lcore 42 as core 27 on socket 1 00:05:35.492 EAL: Detected lcore 43 as core 28 on socket 1 00:05:35.492 EAL: Detected lcore 44 as core 0 on socket 0 00:05:35.492 EAL: Detected lcore 45 as core 1 on socket 0 00:05:35.492 EAL: Detected lcore 46 as core 2 on socket 0 00:05:35.492 EAL: Detected lcore 47 as core 3 on socket 0 00:05:35.492 EAL: Detected lcore 48 as core 4 on socket 0 00:05:35.492 EAL: Detected lcore 49 as core 5 on socket 0 00:05:35.492 EAL: Detected lcore 50 as core 8 on socket 0 00:05:35.492 EAL: Detected lcore 51 as core 9 on socket 0 00:05:35.492 EAL: Detected lcore 52 as core 10 on socket 0 00:05:35.492 EAL: Detected lcore 53 as core 11 on socket 0 00:05:35.492 EAL: Detected lcore 54 as core 12 on socket 0 00:05:35.492 EAL: Detected lcore 55 as core 16 on socket 0 00:05:35.492 EAL: Detected lcore 56 as core 17 on socket 0 00:05:35.492 EAL: Detected lcore 57 as core 18 on socket 0 00:05:35.492 EAL: Detected lcore 58 as core 19 on socket 0 00:05:35.492 EAL: Detected lcore 59 as core 20 on socket 0 00:05:35.492 EAL: Detected lcore 60 as core 21 on socket 0 00:05:35.492 EAL: Detected lcore 61 as core 24 on socket 0 00:05:35.492 EAL: Detected lcore 62 as core 25 on socket 0 00:05:35.492 EAL: Detected lcore 63 as core 26 on socket 0 00:05:35.492 EAL: Detected lcore 64 as core 27 on socket 0 00:05:35.492 EAL: Detected lcore 65 as core 28 on socket 0 00:05:35.492 EAL: Detected lcore 66 as core 0 on socket 1 00:05:35.492 EAL: Detected lcore 67 as core 1 on socket 1 00:05:35.492 EAL: Detected lcore 68 as core 2 on socket 1 00:05:35.492 EAL: Detected lcore 69 as core 3 on socket 1 00:05:35.492 EAL: Detected lcore 70 as core 4 on socket 1 00:05:35.492 EAL: Detected lcore 71 as core 5 on socket 1 00:05:35.492 EAL: Detected lcore 72 as core 8 on socket 1 00:05:35.492 EAL: Detected lcore 73 as core 9 on socket 1 00:05:35.492 EAL: Detected lcore 74 as core 10 on socket 1 00:05:35.492 EAL: Detected lcore 75 as core 11 on socket 1 00:05:35.492 EAL: Detected lcore 76 as core 12 on socket 1 00:05:35.492 EAL: Detected lcore 77 as core 16 on socket 1 00:05:35.492 EAL: Detected lcore 78 as core 17 on socket 1 00:05:35.492 EAL: Detected lcore 79 as core 18 on socket 1 00:05:35.492 EAL: Detected lcore 80 as core 19 on socket 1 00:05:35.492 EAL: Detected lcore 81 as core 20 on socket 1 00:05:35.492 EAL: Detected lcore 82 as core 21 on socket 1 00:05:35.492 EAL: Detected lcore 83 as core 24 on socket 1 00:05:35.492 EAL: Detected lcore 84 as core 25 on socket 1 00:05:35.492 EAL: Detected lcore 85 as core 26 on socket 1 00:05:35.492 
EAL: Detected lcore 86 as core 27 on socket 1 00:05:35.492 EAL: Detected lcore 87 as core 28 on socket 1 00:05:35.492 EAL: Maximum logical cores by configuration: 128 00:05:35.492 EAL: Detected CPU lcores: 88 00:05:35.492 EAL: Detected NUMA nodes: 2 00:05:35.492 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:35.492 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:35.492 EAL: Checking presence of .so 'librte_eal.so' 00:05:35.492 EAL: Detected static linkage of DPDK 00:05:35.492 EAL: No shared files mode enabled, IPC will be disabled 00:05:35.492 EAL: Bus pci wants IOVA as 'DC' 00:05:35.492 EAL: Buses did not request a specific IOVA mode. 00:05:35.492 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:35.492 EAL: Selected IOVA mode 'VA' 00:05:35.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.492 EAL: Probing VFIO support... 00:05:35.492 EAL: IOMMU type 1 (Type 1) is supported 00:05:35.492 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:35.492 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:35.492 EAL: VFIO support initialized 00:05:35.492 EAL: Ask a virtual area of 0x2e000 bytes 00:05:35.492 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:35.492 EAL: Setting up physically contiguous memory... 00:05:35.492 EAL: Setting maximum number of open files to 524288 00:05:35.492 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:35.492 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:35.492 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:35.493 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 
00:05:35.493 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:35.493 EAL: Ask a virtual area of 0x61000 bytes 00:05:35.493 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:35.493 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:35.493 EAL: Ask a virtual area of 0x400000000 bytes 00:05:35.493 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:35.493 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:35.493 EAL: Hugepages will be freed exactly as allocated. 00:05:35.493 EAL: No shared files mode enabled, IPC is disabled 00:05:35.493 EAL: No shared files mode enabled, IPC is disabled 00:05:35.493 EAL: TSC frequency is ~2100000 KHz 00:05:35.493 EAL: Main lcore 0 is ready (tid=7fa023370a40;cpuset=[0]) 00:05:35.493 EAL: Trying to obtain current memory policy. 00:05:35.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.493 EAL: Restoring previous memory policy: 0 00:05:35.493 EAL: request: mp_malloc_sync 00:05:35.493 EAL: No shared files mode enabled, IPC is disabled 00:05:35.493 EAL: Heap on socket 0 was expanded by 2MB 00:05:35.493 EAL: No shared files mode enabled, IPC is disabled 00:05:35.493 EAL: Mem event callback 'spdk:(nil)' registered 00:05:35.493 00:05:35.493 00:05:35.493 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.493 http://cunit.sourceforge.net/ 00:05:35.493 00:05:35.493 00:05:35.493 Suite: components_suite 00:05:35.752 Test: vtophys_malloc_test ...passed 00:05:35.752 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:35.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.752 EAL: Restoring previous memory policy: 4 00:05:35.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.752 EAL: request: mp_malloc_sync 00:05:35.752 EAL: No shared files mode enabled, IPC is disabled 00:05:35.752 EAL: Heap on socket 0 was expanded by 4MB 00:05:35.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.752 EAL: request: mp_malloc_sync 00:05:35.752 EAL: No shared files mode enabled, IPC is disabled 00:05:35.752 EAL: Heap on socket 0 was shrunk by 4MB 00:05:35.752 EAL: Trying to obtain current memory policy. 
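Each memseg list above pairs a small 0x61000-byte bookkeeping area with a 0x400000000-byte data window; that window is exactly the advertised n_segs:8192 times the 2 MB hugepage size:
# Worked check of the EAL reservation size logged above.
printf '%d = 0x%x bytes (16 GiB)\n' $(( 8192 * 2097152 )) $(( 8192 * 2097152 ))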
00:05:35.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.752 EAL: Restoring previous memory policy: 4 00:05:35.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.752 EAL: request: mp_malloc_sync 00:05:35.752 EAL: No shared files mode enabled, IPC is disabled 00:05:35.752 EAL: Heap on socket 0 was expanded by 6MB 00:05:36.010 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.010 EAL: request: mp_malloc_sync 00:05:36.010 EAL: No shared files mode enabled, IPC is disabled 00:05:36.010 EAL: Heap on socket 0 was shrunk by 6MB 00:05:36.011 EAL: Trying to obtain current memory policy. 00:05:36.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.011 EAL: Restoring previous memory policy: 4 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was expanded by 10MB 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was shrunk by 10MB 00:05:36.011 EAL: Trying to obtain current memory policy. 00:05:36.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.011 EAL: Restoring previous memory policy: 4 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was expanded by 18MB 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was shrunk by 18MB 00:05:36.011 EAL: Trying to obtain current memory policy. 00:05:36.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.011 EAL: Restoring previous memory policy: 4 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was expanded by 34MB 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was shrunk by 34MB 00:05:36.011 EAL: Trying to obtain current memory policy. 00:05:36.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.011 EAL: Restoring previous memory policy: 4 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was expanded by 66MB 00:05:36.011 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.011 EAL: request: mp_malloc_sync 00:05:36.011 EAL: No shared files mode enabled, IPC is disabled 00:05:36.011 EAL: Heap on socket 0 was shrunk by 66MB 00:05:36.269 EAL: Trying to obtain current memory policy. 
00:05:36.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.269 EAL: Restoring previous memory policy: 4 00:05:36.269 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.269 EAL: request: mp_malloc_sync 00:05:36.269 EAL: No shared files mode enabled, IPC is disabled 00:05:36.269 EAL: Heap on socket 0 was expanded by 130MB 00:05:36.269 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.269 EAL: request: mp_malloc_sync 00:05:36.269 EAL: No shared files mode enabled, IPC is disabled 00:05:36.269 EAL: Heap on socket 0 was shrunk by 130MB 00:05:36.527 EAL: Trying to obtain current memory policy. 00:05:36.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.527 EAL: Restoring previous memory policy: 4 00:05:36.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.527 EAL: request: mp_malloc_sync 00:05:36.527 EAL: No shared files mode enabled, IPC is disabled 00:05:36.527 EAL: Heap on socket 0 was expanded by 258MB 00:05:36.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.786 EAL: request: mp_malloc_sync 00:05:36.786 EAL: No shared files mode enabled, IPC is disabled 00:05:36.786 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.045 EAL: Trying to obtain current memory policy. 00:05:37.045 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.303 EAL: Restoring previous memory policy: 4 00:05:37.303 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.303 EAL: request: mp_malloc_sync 00:05:37.303 EAL: No shared files mode enabled, IPC is disabled 00:05:37.303 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.869 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.869 EAL: request: mp_malloc_sync 00:05:37.869 EAL: No shared files mode enabled, IPC is disabled 00:05:37.869 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.435 EAL: Trying to obtain current memory policy. 
00:05:38.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.435 EAL: Restoring previous memory policy: 4 00:05:38.435 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.435 EAL: request: mp_malloc_sync 00:05:38.435 EAL: No shared files mode enabled, IPC is disabled 00:05:38.435 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.813 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.813 EAL: request: mp_malloc_sync 00:05:39.813 EAL: No shared files mode enabled, IPC is disabled 00:05:39.813 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.749 passed 00:05:40.749 00:05:40.749 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.749 suites 1 1 n/a 0 0 00:05:40.749 tests 2 2 2 0 0 00:05:40.749 asserts 497 497 497 0 n/a 00:05:40.749 00:05:40.749 Elapsed time = 5.140 seconds 00:05:40.749 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.749 EAL: request: mp_malloc_sync 00:05:40.749 EAL: No shared files mode enabled, IPC is disabled 00:05:40.749 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.749 EAL: No shared files mode enabled, IPC is disabled 00:05:40.749 EAL: No shared files mode enabled, IPC is disabled 00:05:40.749 EAL: No shared files mode enabled, IPC is disabled 00:05:40.749 00:05:40.749 real 0m5.365s 00:05:40.749 user 0m4.562s 00:05:40.749 sys 0m0.749s 00:05:40.749 08:18:53 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.750 08:18:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.750 ************************************ 00:05:40.750 END TEST env_vtophys 00:05:40.750 ************************************ 00:05:40.750 08:18:53 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.750 08:18:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.750 08:18:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.750 08:18:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.750 08:18:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.750 ************************************ 00:05:40.750 START TEST env_pci 00:05:40.750 ************************************ 00:05:40.750 08:18:53 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:41.008 00:05:41.008 00:05:41.008 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.008 http://cunit.sourceforge.net/ 00:05:41.008 00:05:41.008 00:05:41.008 Suite: pci 00:05:41.008 Test: pci_hook ...[2024-07-23 08:18:53.297322] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 592479 has claimed it 00:05:41.008 EAL: Cannot find device (10000:00:01.0) 00:05:41.008 EAL: Failed to attach device on primary process 00:05:41.008 passed 00:05:41.008 00:05:41.008 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.008 suites 1 1 n/a 0 0 00:05:41.008 tests 1 1 1 0 0 00:05:41.008 asserts 25 25 25 0 n/a 00:05:41.008 00:05:41.008 Elapsed time = 0.047 seconds 00:05:41.008 00:05:41.008 real 0m0.110s 00:05:41.008 user 0m0.042s 00:05:41.008 sys 0m0.067s 00:05:41.008 08:18:53 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.008 08:18:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:41.008 ************************************ 00:05:41.008 END TEST env_pci 00:05:41.008 ************************************ 
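The expand/shrink ladder in vtophys_spdk_malloc_test above (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) is a 2^n + 2 MB progression, so every step lands just past a power of two:
# Worked check of the allocation ladder in the trace above.
for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB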
00:05:41.008 08:18:53 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.008 08:18:53 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:41.008 08:18:53 env -- env/env.sh@15 -- # uname 00:05:41.008 08:18:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:41.008 08:18:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:41.008 08:18:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.008 08:18:53 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:41.008 08:18:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.008 08:18:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.008 ************************************ 00:05:41.008 START TEST env_dpdk_post_init 00:05:41.008 ************************************ 00:05:41.008 08:18:53 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:41.008 EAL: Detected CPU lcores: 88 00:05:41.008 EAL: Detected NUMA nodes: 2 00:05:41.008 EAL: Detected static linkage of DPDK 00:05:41.008 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:41.267 EAL: Selected IOVA mode 'VA' 00:05:41.267 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.267 EAL: VFIO support initialized 00:05:41.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.267 EAL: Using IOMMU type 1 (Type 1) 00:05:42.422 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:dc:00.0 (socket 1) 00:05:43.360 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:dd:00.0 (socket 1) 00:05:44.275 EAL: Probe PCI driver: spdk_nvme (8086:0953) device: 0000:de:00.0 (socket 1) 00:05:45.211 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:df:00.0 (socket 1) 00:05:49.397 EAL: Releasing PCI mapped resource for 0000:dd:00.0 00:05:49.397 EAL: Calling pci_unmap_resource for 0000:dd:00.0 at 0x202001004000 00:05:49.397 EAL: Releasing PCI mapped resource for 0000:df:00.0 00:05:49.397 EAL: Calling pci_unmap_resource for 0000:df:00.0 at 0x20200100c000 00:05:49.654 EAL: Releasing PCI mapped resource for 0000:de:00.0 00:05:49.654 EAL: Calling pci_unmap_resource for 0000:de:00.0 at 0x202001008000 00:05:49.913 EAL: Releasing PCI mapped resource for 0000:dc:00.0 00:05:49.913 EAL: Calling pci_unmap_resource for 0000:dc:00.0 at 0x202001000000 00:05:50.480 Starting DPDK initialization... 00:05:50.480 Starting SPDK post initialization... 00:05:50.480 SPDK NVMe probe 00:05:50.480 Attaching to 0000:dc:00.0 00:05:50.480 Attaching to 0000:dd:00.0 00:05:50.480 Attaching to 0000:de:00.0 00:05:50.480 Attaching to 0000:df:00.0 00:05:50.480 Attached to 0000:dc:00.0 00:05:50.480 Attached to 0000:de:00.0 00:05:50.480 Attached to 0000:dd:00.0 00:05:50.480 Attached to 0000:df:00.0 00:05:50.480 Cleaning up... 
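env_dpdk_post_init only probes the four NVMe controllers because setup.sh bound them to vfio-pci beforehand; which driver currently owns a device is always visible in sysfs. A quick check over the bdfs from this node:
# Hedged sketch: show the driver currently bound to each NVMe bdf above.
for bdf in 0000:dc:00.0 0000:dd:00.0 0000:de:00.0 0000:df:00.0; do
  echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
done   # vfio-pci while the tests run, nvme again after setup.sh reset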
00:05:50.480 00:05:50.480 real 0m9.285s 00:05:50.480 user 0m6.054s 00:05:50.480 sys 0m0.281s 00:05:50.480 08:19:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.480 08:19:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.480 ************************************ 00:05:50.480 END TEST env_dpdk_post_init 00:05:50.480 ************************************ 00:05:50.480 08:19:02 env -- common/autotest_common.sh@1142 -- # return 0 00:05:50.480 08:19:02 env -- env/env.sh@26 -- # uname 00:05:50.480 08:19:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:50.480 08:19:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.480 08:19:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.480 08:19:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.480 08:19:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.480 ************************************ 00:05:50.480 START TEST env_mem_callbacks 00:05:50.480 ************************************ 00:05:50.480 08:19:02 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:50.480 EAL: Detected CPU lcores: 88 00:05:50.480 EAL: Detected NUMA nodes: 2 00:05:50.480 EAL: Detected static linkage of DPDK 00:05:50.480 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:50.480 EAL: Selected IOVA mode 'VA' 00:05:50.480 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.480 EAL: VFIO support initialized 00:05:50.480 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:50.480 00:05:50.480 00:05:50.480 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.480 http://cunit.sourceforge.net/ 00:05:50.480 00:05:50.480 00:05:50.480 Suite: memory 00:05:50.480 Test: test ... 
00:05:50.480 register 0x200000200000 2097152 00:05:50.480 malloc 3145728 00:05:50.480 register 0x200000400000 4194304 00:05:50.480 buf 0x2000004fffc0 len 3145728 PASSED 00:05:50.480 malloc 64 00:05:50.480 buf 0x2000004ffec0 len 64 PASSED 00:05:50.480 malloc 4194304 00:05:50.480 register 0x200000800000 6291456 00:05:50.480 buf 0x2000009fffc0 len 4194304 PASSED 00:05:50.480 free 0x2000004fffc0 3145728 00:05:50.480 free 0x2000004ffec0 64 00:05:50.480 unregister 0x200000400000 4194304 PASSED 00:05:50.480 free 0x2000009fffc0 4194304 00:05:50.480 unregister 0x200000800000 6291456 PASSED 00:05:50.480 malloc 8388608 00:05:50.480 register 0x200000400000 10485760 00:05:50.480 buf 0x2000005fffc0 len 8388608 PASSED 00:05:50.480 free 0x2000005fffc0 8388608 00:05:50.480 unregister 0x200000400000 10485760 PASSED 00:05:50.480 passed 00:05:50.480 00:05:50.480 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.480 suites 1 1 n/a 0 0 00:05:50.480 tests 1 1 1 0 0 00:05:50.480 asserts 15 15 15 0 n/a 00:05:50.480 00:05:50.480 Elapsed time = 0.055 seconds 00:05:50.480 00:05:50.480 real 0m0.163s 00:05:50.480 user 0m0.089s 00:05:50.480 sys 0m0.073s 00:05:50.480 08:19:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.480 08:19:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:50.480 ************************************ 00:05:50.480 END TEST env_mem_callbacks 00:05:50.480 ************************************ 00:05:50.739 08:19:03 env -- common/autotest_common.sh@1142 -- # return 0 00:05:50.739 00:05:50.739 real 0m15.565s 00:05:50.739 user 0m11.112s 00:05:50.739 sys 0m1.477s 00:05:50.739 08:19:03 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.739 08:19:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 ************************************ 00:05:50.739 END TEST env 00:05:50.739 ************************************ 00:05:50.739 08:19:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.739 08:19:03 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:50.739 08:19:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.739 08:19:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.739 08:19:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 ************************************ 00:05:50.739 START TEST rpc 00:05:50.739 ************************************ 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:50.739 * Looking for test storage... 00:05:50.739 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:50.739 08:19:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=594086 00:05:50.739 08:19:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.739 08:19:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:50.739 08:19:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 594086 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@829 -- # '[' -z 594086 ']' 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.739 08:19:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 [2024-07-23 08:19:03.209584] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:50.739 [2024-07-23 08:19:03.209674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594086 ] 00:05:50.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.998 [2024-07-23 08:19:03.329678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.998 [2024-07-23 08:19:03.492514] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:50.998 [2024-07-23 08:19:03.492561] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 594086' to capture a snapshot of events at runtime. 00:05:50.998 [2024-07-23 08:19:03.492571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.998 [2024-07-23 08:19:03.492580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.998 [2024-07-23 08:19:03.492587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid594086 for offline analysis/debug. 00:05:50.998 [2024-07-23 08:19:03.492615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.564 08:19:04 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.564 08:19:04 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.564 08:19:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:51.564 08:19:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:51.564 08:19:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:51.564 08:19:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:51.564 08:19:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.564 08:19:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.564 08:19:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 ************************************ 00:05:51.822 START TEST rpc_integrity 00:05:51.822 ************************************ 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.822 08:19:04 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.822 { 00:05:51.822 "name": "Malloc0", 00:05:51.822 "aliases": [ 00:05:51.822 "27c3cbfc-0a11-4bb0-b648-767277535373" 00:05:51.822 ], 00:05:51.822 "product_name": "Malloc disk", 00:05:51.822 "block_size": 512, 00:05:51.822 "num_blocks": 16384, 00:05:51.822 "uuid": "27c3cbfc-0a11-4bb0-b648-767277535373", 00:05:51.822 "assigned_rate_limits": { 00:05:51.822 "rw_ios_per_sec": 0, 00:05:51.822 "rw_mbytes_per_sec": 0, 00:05:51.822 "r_mbytes_per_sec": 0, 00:05:51.822 "w_mbytes_per_sec": 0 00:05:51.822 }, 00:05:51.822 "claimed": false, 00:05:51.822 "zoned": false, 00:05:51.822 "supported_io_types": { 00:05:51.822 "read": true, 00:05:51.822 "write": true, 00:05:51.822 "unmap": true, 00:05:51.822 "flush": true, 00:05:51.822 "reset": true, 00:05:51.822 "nvme_admin": false, 00:05:51.822 "nvme_io": false, 00:05:51.822 "nvme_io_md": false, 00:05:51.822 "write_zeroes": true, 00:05:51.822 "zcopy": true, 00:05:51.822 "get_zone_info": false, 00:05:51.822 "zone_management": false, 00:05:51.822 "zone_append": false, 00:05:51.822 "compare": false, 00:05:51.822 "compare_and_write": false, 00:05:51.822 "abort": true, 00:05:51.822 "seek_hole": false, 00:05:51.822 "seek_data": false, 00:05:51.822 "copy": true, 00:05:51.822 "nvme_iov_md": false 00:05:51.822 }, 00:05:51.822 "memory_domains": [ 00:05:51.822 { 00:05:51.822 "dma_device_id": "system", 00:05:51.822 "dma_device_type": 1 00:05:51.822 }, 00:05:51.822 { 00:05:51.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.822 "dma_device_type": 2 00:05:51.822 } 00:05:51.822 ], 00:05:51.822 "driver_specific": {} 00:05:51.822 } 00:05:51.822 ]' 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 [2024-07-23 08:19:04.224658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:51.822 [2024-07-23 08:19:04.224705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.822 [2024-07-23 08:19:04.224726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002be80 00:05:51.822 [2024-07-23 08:19:04.224740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:05:51.822 [2024-07-23 08:19:04.226443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.822 [2024-07-23 08:19:04.226471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.822 Passthru0 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.822 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.822 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.822 { 00:05:51.822 "name": "Malloc0", 00:05:51.822 "aliases": [ 00:05:51.822 "27c3cbfc-0a11-4bb0-b648-767277535373" 00:05:51.822 ], 00:05:51.822 "product_name": "Malloc disk", 00:05:51.822 "block_size": 512, 00:05:51.822 "num_blocks": 16384, 00:05:51.822 "uuid": "27c3cbfc-0a11-4bb0-b648-767277535373", 00:05:51.822 "assigned_rate_limits": { 00:05:51.822 "rw_ios_per_sec": 0, 00:05:51.822 "rw_mbytes_per_sec": 0, 00:05:51.822 "r_mbytes_per_sec": 0, 00:05:51.822 "w_mbytes_per_sec": 0 00:05:51.822 }, 00:05:51.822 "claimed": true, 00:05:51.822 "claim_type": "exclusive_write", 00:05:51.822 "zoned": false, 00:05:51.822 "supported_io_types": { 00:05:51.822 "read": true, 00:05:51.822 "write": true, 00:05:51.822 "unmap": true, 00:05:51.822 "flush": true, 00:05:51.822 "reset": true, 00:05:51.822 "nvme_admin": false, 00:05:51.822 "nvme_io": false, 00:05:51.822 "nvme_io_md": false, 00:05:51.822 "write_zeroes": true, 00:05:51.822 "zcopy": true, 00:05:51.822 "get_zone_info": false, 00:05:51.822 "zone_management": false, 00:05:51.822 "zone_append": false, 00:05:51.822 "compare": false, 00:05:51.822 "compare_and_write": false, 00:05:51.822 "abort": true, 00:05:51.822 "seek_hole": false, 00:05:51.822 "seek_data": false, 00:05:51.822 "copy": true, 00:05:51.822 "nvme_iov_md": false 00:05:51.822 }, 00:05:51.822 "memory_domains": [ 00:05:51.822 { 00:05:51.822 "dma_device_id": "system", 00:05:51.822 "dma_device_type": 1 00:05:51.822 }, 00:05:51.822 { 00:05:51.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.822 "dma_device_type": 2 00:05:51.822 } 00:05:51.822 ], 00:05:51.822 "driver_specific": {} 00:05:51.822 }, 00:05:51.822 { 00:05:51.822 "name": "Passthru0", 00:05:51.822 "aliases": [ 00:05:51.822 "69719d14-678b-501c-9ff9-9d833de7786d" 00:05:51.822 ], 00:05:51.822 "product_name": "passthru", 00:05:51.822 "block_size": 512, 00:05:51.822 "num_blocks": 16384, 00:05:51.822 "uuid": "69719d14-678b-501c-9ff9-9d833de7786d", 00:05:51.822 "assigned_rate_limits": { 00:05:51.822 "rw_ios_per_sec": 0, 00:05:51.822 "rw_mbytes_per_sec": 0, 00:05:51.822 "r_mbytes_per_sec": 0, 00:05:51.822 "w_mbytes_per_sec": 0 00:05:51.822 }, 00:05:51.822 "claimed": false, 00:05:51.822 "zoned": false, 00:05:51.822 "supported_io_types": { 00:05:51.822 "read": true, 00:05:51.822 "write": true, 00:05:51.822 "unmap": true, 00:05:51.822 "flush": true, 00:05:51.822 "reset": true, 00:05:51.822 "nvme_admin": false, 00:05:51.822 "nvme_io": false, 00:05:51.822 "nvme_io_md": false, 00:05:51.822 "write_zeroes": true, 00:05:51.822 "zcopy": true, 00:05:51.822 "get_zone_info": false, 00:05:51.822 "zone_management": false, 00:05:51.822 "zone_append": false, 00:05:51.822 "compare": false, 00:05:51.822 "compare_and_write": false, 00:05:51.822 "abort": true, 
00:05:51.822 "seek_hole": false, 00:05:51.823 "seek_data": false, 00:05:51.823 "copy": true, 00:05:51.823 "nvme_iov_md": false 00:05:51.823 }, 00:05:51.823 "memory_domains": [ 00:05:51.823 { 00:05:51.823 "dma_device_id": "system", 00:05:51.823 "dma_device_type": 1 00:05:51.823 }, 00:05:51.823 { 00:05:51.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.823 "dma_device_type": 2 00:05:51.823 } 00:05:51.823 ], 00:05:51.823 "driver_specific": { 00:05:51.823 "passthru": { 00:05:51.823 "name": "Passthru0", 00:05:51.823 "base_bdev_name": "Malloc0" 00:05:51.823 } 00:05:51.823 } 00:05:51.823 } 00:05:51.823 ]' 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.823 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.080 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.080 08:19:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.080 00:05:52.080 real 0m0.294s 00:05:52.080 user 0m0.175s 00:05:52.080 sys 0m0.037s 00:05:52.080 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.080 08:19:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.080 ************************************ 00:05:52.081 END TEST rpc_integrity 00:05:52.081 ************************************ 00:05:52.081 08:19:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.081 08:19:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:52.081 08:19:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.081 08:19:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.081 08:19:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 ************************************ 00:05:52.081 START TEST rpc_plugins 00:05:52.081 ************************************ 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:52.081 08:19:04 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:52.081 { 00:05:52.081 "name": "Malloc1", 00:05:52.081 "aliases": [ 00:05:52.081 "8dcb42f0-dfa7-477a-981b-6a4fb8b1536d" 00:05:52.081 ], 00:05:52.081 "product_name": "Malloc disk", 00:05:52.081 "block_size": 4096, 00:05:52.081 "num_blocks": 256, 00:05:52.081 "uuid": "8dcb42f0-dfa7-477a-981b-6a4fb8b1536d", 00:05:52.081 "assigned_rate_limits": { 00:05:52.081 "rw_ios_per_sec": 0, 00:05:52.081 "rw_mbytes_per_sec": 0, 00:05:52.081 "r_mbytes_per_sec": 0, 00:05:52.081 "w_mbytes_per_sec": 0 00:05:52.081 }, 00:05:52.081 "claimed": false, 00:05:52.081 "zoned": false, 00:05:52.081 "supported_io_types": { 00:05:52.081 "read": true, 00:05:52.081 "write": true, 00:05:52.081 "unmap": true, 00:05:52.081 "flush": true, 00:05:52.081 "reset": true, 00:05:52.081 "nvme_admin": false, 00:05:52.081 "nvme_io": false, 00:05:52.081 "nvme_io_md": false, 00:05:52.081 "write_zeroes": true, 00:05:52.081 "zcopy": true, 00:05:52.081 "get_zone_info": false, 00:05:52.081 "zone_management": false, 00:05:52.081 "zone_append": false, 00:05:52.081 "compare": false, 00:05:52.081 "compare_and_write": false, 00:05:52.081 "abort": true, 00:05:52.081 "seek_hole": false, 00:05:52.081 "seek_data": false, 00:05:52.081 "copy": true, 00:05:52.081 "nvme_iov_md": false 00:05:52.081 }, 00:05:52.081 "memory_domains": [ 00:05:52.081 { 00:05:52.081 "dma_device_id": "system", 00:05:52.081 "dma_device_type": 1 00:05:52.081 }, 00:05:52.081 { 00:05:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.081 "dma_device_type": 2 00:05:52.081 } 00:05:52.081 ], 00:05:52.081 "driver_specific": {} 00:05:52.081 } 00:05:52.081 ]' 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:52.081 08:19:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:52.081 00:05:52.081 real 0m0.137s 00:05:52.081 user 0m0.082s 00:05:52.081 sys 0m0.020s 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.081 08:19:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:52.081 ************************************ 00:05:52.081 END TEST rpc_plugins 00:05:52.081 ************************************ 00:05:52.339 08:19:04 rpc -- common/autotest_common.sh@1142 -- # return 0 
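Both rpc_integrity and rpc_plugins above follow the same create/inspect/delete pattern: make a bdev over JSON-RPC, count the result of bdev_get_bdevs with jq, layer a passthru bdev on top, then tear everything down and confirm the list is empty again. A hand-run sketch against the spdk_tgt started earlier, assuming rpc_cmd resolves to scripts/rpc.py on the default /var/tmp/spdk.sock socket:

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512        # prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length      # 2: Malloc0 plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length      # back to 0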
00:05:52.339 08:19:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:52.339 08:19:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.339 08:19:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.339 08:19:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.339 ************************************ 00:05:52.339 START TEST rpc_trace_cmd_test 00:05:52.339 ************************************ 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:52.339 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid594086", 00:05:52.339 "tpoint_group_mask": "0x8", 00:05:52.339 "iscsi_conn": { 00:05:52.339 "mask": "0x2", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "scsi": { 00:05:52.339 "mask": "0x4", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "bdev": { 00:05:52.339 "mask": "0x8", 00:05:52.339 "tpoint_mask": "0xffffffffffffffff" 00:05:52.339 }, 00:05:52.339 "nvmf_rdma": { 00:05:52.339 "mask": "0x10", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "nvmf_tcp": { 00:05:52.339 "mask": "0x20", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "ftl": { 00:05:52.339 "mask": "0x40", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "blobfs": { 00:05:52.339 "mask": "0x80", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "dsa": { 00:05:52.339 "mask": "0x200", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "thread": { 00:05:52.339 "mask": "0x400", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "nvme_pcie": { 00:05:52.339 "mask": "0x800", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "iaa": { 00:05:52.339 "mask": "0x1000", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "nvme_tcp": { 00:05:52.339 "mask": "0x2000", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "bdev_nvme": { 00:05:52.339 "mask": "0x4000", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 }, 00:05:52.339 "sock": { 00:05:52.339 "mask": "0x8000", 00:05:52.339 "tpoint_mask": "0x0" 00:05:52.339 } 00:05:52.339 }' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 
-- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:52.339 00:05:52.339 real 0m0.198s 00:05:52.339 user 0m0.163s 00:05:52.339 sys 0m0.027s 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.339 08:19:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:52.339 ************************************ 00:05:52.339 END TEST rpc_trace_cmd_test 00:05:52.339 ************************************ 00:05:52.598 08:19:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.598 08:19:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:52.598 08:19:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:52.598 08:19:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:52.598 08:19:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.598 08:19:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.598 08:19:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 ************************************ 00:05:52.598 START TEST rpc_daemon_integrity 00:05:52.598 ************************************ 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.598 08:19:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:52.598 { 00:05:52.598 "name": "Malloc2", 00:05:52.598 "aliases": [ 00:05:52.598 "c8022a01-67f9-4569-b4e6-984a0d8de1df" 00:05:52.598 ], 00:05:52.598 "product_name": "Malloc disk", 00:05:52.598 "block_size": 512, 00:05:52.598 "num_blocks": 16384, 00:05:52.598 "uuid": "c8022a01-67f9-4569-b4e6-984a0d8de1df", 00:05:52.598 "assigned_rate_limits": { 00:05:52.598 "rw_ios_per_sec": 0, 00:05:52.598 "rw_mbytes_per_sec": 0, 00:05:52.598 "r_mbytes_per_sec": 0, 00:05:52.598 "w_mbytes_per_sec": 0 00:05:52.598 }, 00:05:52.598 "claimed": false, 00:05:52.598 "zoned": false, 00:05:52.598 "supported_io_types": { 00:05:52.598 "read": true, 00:05:52.598 "write": true, 00:05:52.598 "unmap": true, 00:05:52.598 "flush": true, 00:05:52.598 "reset": true, 00:05:52.598 
"nvme_admin": false, 00:05:52.598 "nvme_io": false, 00:05:52.598 "nvme_io_md": false, 00:05:52.598 "write_zeroes": true, 00:05:52.598 "zcopy": true, 00:05:52.598 "get_zone_info": false, 00:05:52.598 "zone_management": false, 00:05:52.598 "zone_append": false, 00:05:52.598 "compare": false, 00:05:52.598 "compare_and_write": false, 00:05:52.598 "abort": true, 00:05:52.598 "seek_hole": false, 00:05:52.598 "seek_data": false, 00:05:52.598 "copy": true, 00:05:52.598 "nvme_iov_md": false 00:05:52.598 }, 00:05:52.598 "memory_domains": [ 00:05:52.598 { 00:05:52.598 "dma_device_id": "system", 00:05:52.598 "dma_device_type": 1 00:05:52.598 }, 00:05:52.598 { 00:05:52.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.598 "dma_device_type": 2 00:05:52.598 } 00:05:52.598 ], 00:05:52.598 "driver_specific": {} 00:05:52.598 } 00:05:52.598 ]' 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 [2024-07-23 08:19:05.053752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:52.598 [2024-07-23 08:19:05.053801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:52.598 [2024-07-23 08:19:05.053821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d080 00:05:52.598 [2024-07-23 08:19:05.053833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:52.598 [2024-07-23 08:19:05.055488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:52.598 [2024-07-23 08:19:05.055515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:52.598 Passthru0 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.598 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:52.598 { 00:05:52.598 "name": "Malloc2", 00:05:52.598 "aliases": [ 00:05:52.598 "c8022a01-67f9-4569-b4e6-984a0d8de1df" 00:05:52.598 ], 00:05:52.598 "product_name": "Malloc disk", 00:05:52.598 "block_size": 512, 00:05:52.598 "num_blocks": 16384, 00:05:52.598 "uuid": "c8022a01-67f9-4569-b4e6-984a0d8de1df", 00:05:52.598 "assigned_rate_limits": { 00:05:52.598 "rw_ios_per_sec": 0, 00:05:52.598 "rw_mbytes_per_sec": 0, 00:05:52.598 "r_mbytes_per_sec": 0, 00:05:52.598 "w_mbytes_per_sec": 0 00:05:52.598 }, 00:05:52.598 "claimed": true, 00:05:52.598 "claim_type": "exclusive_write", 00:05:52.598 "zoned": false, 00:05:52.598 "supported_io_types": { 00:05:52.598 "read": true, 00:05:52.598 "write": true, 00:05:52.598 "unmap": true, 00:05:52.598 "flush": true, 00:05:52.598 "reset": true, 00:05:52.598 "nvme_admin": false, 00:05:52.598 "nvme_io": false, 00:05:52.598 "nvme_io_md": false, 00:05:52.598 "write_zeroes": true, 
00:05:52.598 "zcopy": true, 00:05:52.598 "get_zone_info": false, 00:05:52.598 "zone_management": false, 00:05:52.598 "zone_append": false, 00:05:52.598 "compare": false, 00:05:52.598 "compare_and_write": false, 00:05:52.598 "abort": true, 00:05:52.598 "seek_hole": false, 00:05:52.598 "seek_data": false, 00:05:52.598 "copy": true, 00:05:52.598 "nvme_iov_md": false 00:05:52.598 }, 00:05:52.598 "memory_domains": [ 00:05:52.598 { 00:05:52.598 "dma_device_id": "system", 00:05:52.598 "dma_device_type": 1 00:05:52.598 }, 00:05:52.598 { 00:05:52.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.598 "dma_device_type": 2 00:05:52.598 } 00:05:52.598 ], 00:05:52.598 "driver_specific": {} 00:05:52.598 }, 00:05:52.598 { 00:05:52.598 "name": "Passthru0", 00:05:52.598 "aliases": [ 00:05:52.598 "dd64bd5d-5615-5d57-bc1e-748d6a7ae1b8" 00:05:52.598 ], 00:05:52.598 "product_name": "passthru", 00:05:52.598 "block_size": 512, 00:05:52.598 "num_blocks": 16384, 00:05:52.598 "uuid": "dd64bd5d-5615-5d57-bc1e-748d6a7ae1b8", 00:05:52.598 "assigned_rate_limits": { 00:05:52.598 "rw_ios_per_sec": 0, 00:05:52.598 "rw_mbytes_per_sec": 0, 00:05:52.598 "r_mbytes_per_sec": 0, 00:05:52.598 "w_mbytes_per_sec": 0 00:05:52.598 }, 00:05:52.598 "claimed": false, 00:05:52.598 "zoned": false, 00:05:52.598 "supported_io_types": { 00:05:52.598 "read": true, 00:05:52.598 "write": true, 00:05:52.598 "unmap": true, 00:05:52.598 "flush": true, 00:05:52.598 "reset": true, 00:05:52.598 "nvme_admin": false, 00:05:52.598 "nvme_io": false, 00:05:52.598 "nvme_io_md": false, 00:05:52.598 "write_zeroes": true, 00:05:52.598 "zcopy": true, 00:05:52.598 "get_zone_info": false, 00:05:52.598 "zone_management": false, 00:05:52.598 "zone_append": false, 00:05:52.598 "compare": false, 00:05:52.598 "compare_and_write": false, 00:05:52.598 "abort": true, 00:05:52.598 "seek_hole": false, 00:05:52.598 "seek_data": false, 00:05:52.599 "copy": true, 00:05:52.599 "nvme_iov_md": false 00:05:52.599 }, 00:05:52.599 "memory_domains": [ 00:05:52.599 { 00:05:52.599 "dma_device_id": "system", 00:05:52.599 "dma_device_type": 1 00:05:52.599 }, 00:05:52.599 { 00:05:52.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:52.599 "dma_device_type": 2 00:05:52.599 } 00:05:52.599 ], 00:05:52.599 "driver_specific": { 00:05:52.599 "passthru": { 00:05:52.599 "name": "Passthru0", 00:05:52.599 "base_bdev_name": "Malloc2" 00:05:52.599 } 00:05:52.599 } 00:05:52.599 } 00:05:52.599 ]' 00:05:52.599 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:52.857 00:05:52.857 real 0m0.289s 00:05:52.857 user 0m0.175s 00:05:52.857 sys 0m0.032s 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.857 08:19:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 ************************************ 00:05:52.857 END TEST rpc_daemon_integrity 00:05:52.857 ************************************ 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.857 08:19:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:52.857 08:19:05 rpc -- rpc/rpc.sh@84 -- # killprocess 594086 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@948 -- # '[' -z 594086 ']' 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@952 -- # kill -0 594086 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594086 00:05:52.857 08:19:05 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.858 08:19:05 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.858 08:19:05 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594086' 00:05:52.858 killing process with pid 594086 00:05:52.858 08:19:05 rpc -- common/autotest_common.sh@967 -- # kill 594086 00:05:52.858 08:19:05 rpc -- common/autotest_common.sh@972 -- # wait 594086 00:05:54.760 00:05:54.760 real 0m3.698s 00:05:54.760 user 0m4.245s 00:05:54.760 sys 0m0.811s 00:05:54.760 08:19:06 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.760 08:19:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 ************************************ 00:05:54.760 END TEST rpc 00:05:54.760 ************************************ 00:05:54.760 08:19:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.760 08:19:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.760 08:19:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.760 08:19:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.760 08:19:06 -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 ************************************ 00:05:54.760 START TEST skip_rpc 00:05:54.760 ************************************ 00:05:54.760 08:19:06 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:54.760 * Looking for test storage... 
00:05:54.760 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:54.760 08:19:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:54.760 08:19:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:54.760 08:19:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:54.760 08:19:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.760 08:19:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.760 08:19:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.760 ************************************ 00:05:54.760 START TEST skip_rpc 00:05:54.760 ************************************ 00:05:54.760 08:19:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:54.760 08:19:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=594900 00:05:54.760 08:19:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.760 08:19:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:54.760 08:19:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:54.760 [2024-07-23 08:19:07.008033] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:54.760 [2024-07-23 08:19:07.008120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594900 ] 00:05:54.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.760 [2024-07-23 08:19:07.119374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.760 [2024-07-23 08:19:07.277854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.045 08:19:11 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 594900 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 594900 ']' 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 594900 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.045 08:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 594900 00:06:00.045 08:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.045 08:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.045 08:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 594900' 00:06:00.045 killing process with pid 594900 00:06:00.045 08:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 594900 00:06:00.045 08:19:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 594900 00:06:01.425 00:06:01.425 real 0m6.545s 00:06:01.425 user 0m6.202s 00:06:01.425 sys 0m0.360s 00:06:01.425 08:19:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.425 08:19:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.425 ************************************ 00:06:01.425 END TEST skip_rpc 00:06:01.425 ************************************ 00:06:01.425 08:19:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.425 08:19:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.426 08:19:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.426 08:19:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.426 08:19:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.426 ************************************ 00:06:01.426 START TEST skip_rpc_with_json 00:06:01.426 ************************************ 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=595999 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 595999 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 595999 ']' 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
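The skip_rpc test that just finished starts spdk_tgt with --no-rpc-server and asserts, via the NOT wrapper, that spdk_get_version fails because no RPC listener ever comes up. A minimal stand-alone version of that check (managing the target with $! is an assumption; the harness uses its own killprocess helper):

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5        # same settle delay the test script uses above
  if ./scripts/rpc.py spdk_get_version; then
      echo 'FAIL: RPC server unexpectedly answered'
  else
      echo 'PASS: spdk_get_version failed with no RPC server'
  fi
  kill $tgt_pid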
00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.426 08:19:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.426 [2024-07-23 08:19:13.623443] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:01.426 [2024-07-23 08:19:13.623532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid595999 ] 00:06:01.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.426 [2024-07-23 08:19:13.737930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.426 [2024-07-23 08:19:13.899587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.995 [2024-07-23 08:19:14.482379] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:01.995 request: 00:06:01.995 { 00:06:01.995 "trtype": "tcp", 00:06:01.995 "method": "nvmf_get_transports", 00:06:01.995 "req_id": 1 00:06:01.995 } 00:06:01.995 Got JSON-RPC error response 00:06:01.995 response: 00:06:01.995 { 00:06:01.995 "code": -19, 00:06:01.995 "message": "No such device" 00:06:01.995 } 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.995 [2024-07-23 08:19:14.494476] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.995 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.255 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.255 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:02.255 { 00:06:02.255 "subsystems": [ 00:06:02.255 { 00:06:02.255 "subsystem": "scheduler", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "framework_set_scheduler", 00:06:02.255 "params": { 00:06:02.255 "name": "static" 00:06:02.255 } 00:06:02.255 } 00:06:02.255 ] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "vmd", 00:06:02.255 "config": [] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "sock", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "sock_set_default_impl", 00:06:02.255 
"params": { 00:06:02.255 "impl_name": "posix" 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "sock_impl_set_options", 00:06:02.255 "params": { 00:06:02.255 "impl_name": "ssl", 00:06:02.255 "recv_buf_size": 4096, 00:06:02.255 "send_buf_size": 4096, 00:06:02.255 "enable_recv_pipe": true, 00:06:02.255 "enable_quickack": false, 00:06:02.255 "enable_placement_id": 0, 00:06:02.255 "enable_zerocopy_send_server": true, 00:06:02.255 "enable_zerocopy_send_client": false, 00:06:02.255 "zerocopy_threshold": 0, 00:06:02.255 "tls_version": 0, 00:06:02.255 "enable_ktls": false 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "sock_impl_set_options", 00:06:02.255 "params": { 00:06:02.255 "impl_name": "posix", 00:06:02.255 "recv_buf_size": 2097152, 00:06:02.255 "send_buf_size": 2097152, 00:06:02.255 "enable_recv_pipe": true, 00:06:02.255 "enable_quickack": false, 00:06:02.255 "enable_placement_id": 0, 00:06:02.255 "enable_zerocopy_send_server": true, 00:06:02.255 "enable_zerocopy_send_client": false, 00:06:02.255 "zerocopy_threshold": 0, 00:06:02.255 "tls_version": 0, 00:06:02.255 "enable_ktls": false 00:06:02.255 } 00:06:02.255 } 00:06:02.255 ] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "iobuf", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "iobuf_set_options", 00:06:02.255 "params": { 00:06:02.255 "small_pool_count": 8192, 00:06:02.255 "large_pool_count": 1024, 00:06:02.255 "small_bufsize": 8192, 00:06:02.255 "large_bufsize": 135168 00:06:02.255 } 00:06:02.255 } 00:06:02.255 ] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "keyring", 00:06:02.255 "config": [] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "vfio_user_target", 00:06:02.255 "config": null 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "accel", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "accel_set_options", 00:06:02.255 "params": { 00:06:02.255 "small_cache_size": 128, 00:06:02.255 "large_cache_size": 16, 00:06:02.255 "task_count": 2048, 00:06:02.255 "sequence_count": 2048, 00:06:02.255 "buf_count": 2048 00:06:02.255 } 00:06:02.255 } 00:06:02.255 ] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "bdev", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "bdev_set_options", 00:06:02.255 "params": { 00:06:02.255 "bdev_io_pool_size": 65535, 00:06:02.255 "bdev_io_cache_size": 256, 00:06:02.255 "bdev_auto_examine": true, 00:06:02.255 "iobuf_small_cache_size": 128, 00:06:02.255 "iobuf_large_cache_size": 16 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "bdev_raid_set_options", 00:06:02.255 "params": { 00:06:02.255 "process_window_size_kb": 1024, 00:06:02.255 "process_max_bandwidth_mb_sec": 0 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "bdev_nvme_set_options", 00:06:02.255 "params": { 00:06:02.255 "action_on_timeout": "none", 00:06:02.255 "timeout_us": 0, 00:06:02.255 "timeout_admin_us": 0, 00:06:02.255 "keep_alive_timeout_ms": 10000, 00:06:02.255 "arbitration_burst": 0, 00:06:02.255 "low_priority_weight": 0, 00:06:02.255 "medium_priority_weight": 0, 00:06:02.255 "high_priority_weight": 0, 00:06:02.255 "nvme_adminq_poll_period_us": 10000, 00:06:02.255 "nvme_ioq_poll_period_us": 0, 00:06:02.255 "io_queue_requests": 0, 00:06:02.255 "delay_cmd_submit": true, 00:06:02.255 "transport_retry_count": 4, 00:06:02.255 "bdev_retry_count": 3, 00:06:02.255 "transport_ack_timeout": 0, 00:06:02.255 "ctrlr_loss_timeout_sec": 0, 00:06:02.255 "reconnect_delay_sec": 0, 
00:06:02.255 "fast_io_fail_timeout_sec": 0, 00:06:02.255 "disable_auto_failback": false, 00:06:02.255 "generate_uuids": false, 00:06:02.255 "transport_tos": 0, 00:06:02.255 "nvme_error_stat": false, 00:06:02.255 "rdma_srq_size": 0, 00:06:02.255 "io_path_stat": false, 00:06:02.255 "allow_accel_sequence": false, 00:06:02.255 "rdma_max_cq_size": 0, 00:06:02.255 "rdma_cm_event_timeout_ms": 0, 00:06:02.255 "dhchap_digests": [ 00:06:02.255 "sha256", 00:06:02.255 "sha384", 00:06:02.255 "sha512" 00:06:02.255 ], 00:06:02.255 "dhchap_dhgroups": [ 00:06:02.255 "null", 00:06:02.255 "ffdhe2048", 00:06:02.255 "ffdhe3072", 00:06:02.255 "ffdhe4096", 00:06:02.255 "ffdhe6144", 00:06:02.255 "ffdhe8192" 00:06:02.255 ] 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "bdev_nvme_set_hotplug", 00:06:02.255 "params": { 00:06:02.255 "period_us": 100000, 00:06:02.255 "enable": false 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "bdev_iscsi_set_options", 00:06:02.255 "params": { 00:06:02.255 "timeout_sec": 30 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "bdev_wait_for_examine" 00:06:02.255 } 00:06:02.255 ] 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "subsystem": "nvmf", 00:06:02.255 "config": [ 00:06:02.255 { 00:06:02.255 "method": "nvmf_set_config", 00:06:02.255 "params": { 00:06:02.255 "discovery_filter": "match_any", 00:06:02.255 "admin_cmd_passthru": { 00:06:02.255 "identify_ctrlr": false 00:06:02.255 } 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "nvmf_set_max_subsystems", 00:06:02.255 "params": { 00:06:02.255 "max_subsystems": 1024 00:06:02.255 } 00:06:02.255 }, 00:06:02.255 { 00:06:02.255 "method": "nvmf_set_crdt", 00:06:02.255 "params": { 00:06:02.255 "crdt1": 0, 00:06:02.256 "crdt2": 0, 00:06:02.256 "crdt3": 0 00:06:02.256 } 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "method": "nvmf_create_transport", 00:06:02.256 "params": { 00:06:02.256 "trtype": "TCP", 00:06:02.256 "max_queue_depth": 128, 00:06:02.256 "max_io_qpairs_per_ctrlr": 127, 00:06:02.256 "in_capsule_data_size": 4096, 00:06:02.256 "max_io_size": 131072, 00:06:02.256 "io_unit_size": 131072, 00:06:02.256 "max_aq_depth": 128, 00:06:02.256 "num_shared_buffers": 511, 00:06:02.256 "buf_cache_size": 4294967295, 00:06:02.256 "dif_insert_or_strip": false, 00:06:02.256 "zcopy": false, 00:06:02.256 "c2h_success": true, 00:06:02.256 "sock_priority": 0, 00:06:02.256 "abort_timeout_sec": 1, 00:06:02.256 "ack_timeout": 0, 00:06:02.256 "data_wr_pool_size": 0 00:06:02.256 } 00:06:02.256 } 00:06:02.256 ] 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "nbd", 00:06:02.256 "config": [] 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "ublk", 00:06:02.256 "config": [] 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "vhost_blk", 00:06:02.256 "config": [] 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "scsi", 00:06:02.256 "config": null 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "iscsi", 00:06:02.256 "config": [ 00:06:02.256 { 00:06:02.256 "method": "iscsi_set_options", 00:06:02.256 "params": { 00:06:02.256 "node_base": "iqn.2016-06.io.spdk", 00:06:02.256 "max_sessions": 128, 00:06:02.256 "max_connections_per_session": 2, 00:06:02.256 "max_queue_depth": 64, 00:06:02.256 "default_time2wait": 2, 00:06:02.256 "default_time2retain": 20, 00:06:02.256 "first_burst_length": 8192, 00:06:02.256 "immediate_data": true, 00:06:02.256 "allow_duplicated_isid": false, 00:06:02.256 "error_recovery_level": 0, 00:06:02.256 "nop_timeout": 60, 
00:06:02.256 "nop_in_interval": 30, 00:06:02.256 "disable_chap": false, 00:06:02.256 "require_chap": false, 00:06:02.256 "mutual_chap": false, 00:06:02.256 "chap_group": 0, 00:06:02.256 "max_large_datain_per_connection": 64, 00:06:02.256 "max_r2t_per_connection": 4, 00:06:02.256 "pdu_pool_size": 36864, 00:06:02.256 "immediate_data_pool_size": 16384, 00:06:02.256 "data_out_pool_size": 2048 00:06:02.256 } 00:06:02.256 } 00:06:02.256 ] 00:06:02.256 }, 00:06:02.256 { 00:06:02.256 "subsystem": "vhost_scsi", 00:06:02.256 "config": [] 00:06:02.256 } 00:06:02.256 ] 00:06:02.256 } 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 595999 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 595999 ']' 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 595999 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 595999 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 595999' 00:06:02.256 killing process with pid 595999 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 595999 00:06:02.256 08:19:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 595999 00:06:04.161 08:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=596434 00:06:04.161 08:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:04.161 08:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 596434 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 596434 ']' 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 596434 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 596434 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 596434' 00:06:09.431 killing process with pid 596434 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 596434 00:06:09.431 08:19:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# wait 596434 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:10.366 00:06:10.366 real 0m9.168s 00:06:10.366 user 0m8.777s 00:06:10.366 sys 0m0.794s 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.366 ************************************ 00:06:10.366 END TEST skip_rpc_with_json 00:06:10.366 ************************************ 00:06:10.366 08:19:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.366 08:19:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:10.366 08:19:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.366 08:19:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.366 08:19:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.366 ************************************ 00:06:10.366 START TEST skip_rpc_with_delay 00:06:10.366 ************************************ 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.366 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:10.366 [2024-07-23 08:19:22.863506] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
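For context, the negative test traced above relies on the NOT helper from common/autotest_common.sh: run a command that is expected to fail and invert its exit status, so the test passes only when spdk_tgt refuses '--wait-for-rpc' without an RPC server. A minimal sketch of that pattern (simplified; the real helper also does the 'es' exit-code bookkeeping visible in this trace):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, record its status
        (( es != 0 ))        # pass only if the command actually failed
    }
    # Expected to fail: --wait-for-rpc combined with --no-rpc-server
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc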
00:06:10.366 [2024-07-23 08:19:22.863615] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.625 00:06:10.625 real 0m0.094s 00:06:10.625 user 0m0.052s 00:06:10.625 sys 0m0.042s 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.625 08:19:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:10.625 ************************************ 00:06:10.625 END TEST skip_rpc_with_delay 00:06:10.625 ************************************ 00:06:10.625 08:19:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.625 08:19:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.625 08:19:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.625 08:19:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.625 08:19:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.625 08:19:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.625 08:19:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.625 ************************************ 00:06:10.625 START TEST exit_on_failed_rpc_init 00:06:10.625 ************************************ 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=597554 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 597554 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 597554 ']' 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.625 08:19:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.626 [2024-07-23 08:19:23.028964] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:10.626 [2024-07-23 08:19:23.029043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597554 ] 00:06:10.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.890 [2024-07-23 08:19:23.149339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.890 [2024-07-23 08:19:23.307415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:11.458 08:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:11.458 [2024-07-23 08:19:23.936914] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:11.458 [2024-07-23 08:19:23.936991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597751 ] 00:06:11.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.717 [2024-07-23 08:19:24.051046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.717 [2024-07-23 08:19:24.216365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.717 [2024-07-23 08:19:24.216458] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:11.717 [2024-07-23 08:19:24.216473] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:11.717 [2024-07-23 08:19:24.216484] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.975 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:11.975 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.975 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:11.975 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.975 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:11.976 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 597554 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 597554 ']' 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 597554 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597554 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597554' 00:06:12.233 killing process with pid 597554 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 597554 00:06:12.233 08:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 597554 00:06:13.610 00:06:13.610 real 0m3.045s 00:06:13.610 user 0m3.434s 00:06:13.610 sys 0m0.564s 00:06:13.610 08:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.610 08:19:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.610 ************************************ 00:06:13.610 END TEST exit_on_failed_rpc_init 00:06:13.611 ************************************ 00:06:13.611 08:19:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.611 08:19:26 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:13.611 00:06:13.611 real 0m19.226s 00:06:13.611 user 0m18.622s 00:06:13.611 sys 0m2.003s 00:06:13.611 08:19:26 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.611 08:19:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.611 ************************************ 00:06:13.611 END TEST skip_rpc 00:06:13.611 ************************************ 00:06:13.611 08:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.611 08:19:26 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:13.611 08:19:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.611 08:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.611 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.870 ************************************ 00:06:13.870 START TEST rpc_client 00:06:13.870 ************************************ 00:06:13.870 08:19:26 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:13.870 * Looking for test storage... 00:06:13.870 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:13.870 08:19:26 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:13.870 OK 00:06:13.870 08:19:26 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:13.870 00:06:13.870 real 0m0.143s 00:06:13.870 user 0m0.064s 00:06:13.870 sys 0m0.086s 00:06:13.870 08:19:26 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.870 08:19:26 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:13.870 ************************************ 00:06:13.870 END TEST rpc_client 00:06:13.870 ************************************ 00:06:13.870 08:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.870 08:19:26 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:13.870 08:19:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.870 08:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.870 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:06:13.870 ************************************ 00:06:13.870 START TEST json_config 00:06:13.870 ************************************ 00:06:13.870 08:19:26 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.130 08:19:26 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:14.130 08:19:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.130 08:19:26 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.130 08:19:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.130 08:19:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.130 08:19:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.130 08:19:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.130 08:19:26 json_config -- paths/export.sh@5 -- # export PATH 00:06:14.130 08:19:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@47 -- # : 0 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.130 08:19:26 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.130 08:19:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:14.130 WARNING: No tests are enabled so not running JSON configuration tests 00:06:14.130 08:19:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:14.130 00:06:14.130 real 0m0.091s 00:06:14.130 user 0m0.052s 00:06:14.130 sys 0m0.039s 00:06:14.130 08:19:26 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.130 08:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.130 ************************************ 00:06:14.130 END TEST json_config 00:06:14.130 ************************************ 00:06:14.130 08:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.130 08:19:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.130 08:19:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.130 08:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.130 08:19:26 -- common/autotest_common.sh@10 -- # set +x 00:06:14.130 ************************************ 00:06:14.130 START TEST json_config_extra_key 00:06:14.130 ************************************ 00:06:14.130 08:19:26 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0055cbfc-b15d-ea11-906e-0017a4403562 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0055cbfc-b15d-ea11-906e-0017a4403562 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:14.131 08:19:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.131 08:19:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.131 08:19:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.131 08:19:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.131 08:19:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.131 08:19:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.131 08:19:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:14.131 08:19:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.131 08:19:26 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.131 08:19:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:14.131 INFO: launching applications... 00:06:14.131 08:19:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=598303 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.131 Waiting for target to run... 
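The launch sequence above starts spdk_tgt with the extra_key JSON config and then waits for its RPC socket. A rough sketch of that wait, assuming the helper simply polls until the UNIX-domain socket appears while the pid stays alive (the real waitforlisten in common/autotest_common.sh is more thorough; the function name here is illustrative):

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        local i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            [[ -S "$sock" ]] && return 0             # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }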
00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 598303 /var/tmp/spdk_tgt.sock 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 598303 ']' 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.131 08:19:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.131 08:19:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.131 [2024-07-23 08:19:26.630981] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:14.131 [2024-07-23 08:19:26.631086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598303 ] 00:06:14.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.650 [2024-07-23 08:19:26.964530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.650 [2024-07-23 08:19:27.113084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.217 08:19:27 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.217 08:19:27 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:15.217 08:19:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.217 00:06:15.217 08:19:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:15.217 INFO: shutting down applications... 
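The shutdown traced below follows the pattern in json_config/common.sh: send SIGINT, then poll the pid with 'kill -0' in half-second steps, up to 30 tries, before declaring the target down. Condensed from the trace:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # process exited cleanly
        sleep 0.5
    done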
00:06:15.217 08:19:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.217 08:19:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 598303 ]] 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 598303 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 598303 00:06:15.218 08:19:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.785 08:19:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.785 08:19:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.785 08:19:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 598303 00:06:15.785 08:19:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.353 08:19:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.353 08:19:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.353 08:19:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 598303 00:06:16.353 08:19:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.612 08:19:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.612 08:19:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.612 08:19:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 598303 00:06:16.612 08:19:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 598303 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.181 08:19:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.181 SPDK target shutdown done 00:06:17.181 08:19:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:17.181 Success 00:06:17.181 00:06:17.181 real 0m3.075s 00:06:17.181 user 0m2.640s 00:06:17.181 sys 0m0.483s 00:06:17.181 08:19:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.181 08:19:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.181 ************************************ 00:06:17.181 END TEST json_config_extra_key 00:06:17.181 ************************************ 00:06:17.181 08:19:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.181 08:19:29 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.181 08:19:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.181 08:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:06:17.181 08:19:29 -- common/autotest_common.sh@10 -- # set +x 00:06:17.181 ************************************ 00:06:17.181 START TEST alias_rpc 00:06:17.181 ************************************ 00:06:17.181 08:19:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.440 * Looking for test storage... 00:06:17.440 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:17.440 08:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.440 08:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=598826 00:06:17.440 08:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 598826 00:06:17.440 08:19:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.440 08:19:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 598826 ']' 00:06:17.441 08:19:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.441 08:19:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.441 08:19:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.441 08:19:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.441 08:19:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.441 [2024-07-23 08:19:29.771862] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:17.441 [2024-07-23 08:19:29.771963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598826 ] 00:06:17.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.441 [2024-07-23 08:19:29.887394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.699 [2024-07-23 08:19:30.059804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.265 08:19:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.265 08:19:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:18.265 08:19:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:18.524 08:19:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 598826 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 598826 ']' 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 598826 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 598826 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 598826' 00:06:18.524 killing process with pid 598826 00:06:18.524 08:19:30 alias_rpc 
-- common/autotest_common.sh@967 -- # kill 598826 00:06:18.524 08:19:30 alias_rpc -- common/autotest_common.sh@972 -- # wait 598826 00:06:19.903 00:06:19.903 real 0m2.750s 00:06:19.903 user 0m2.800s 00:06:19.903 sys 0m0.495s 00:06:19.903 08:19:32 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.903 08:19:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.903 ************************************ 00:06:19.903 END TEST alias_rpc 00:06:19.903 ************************************ 00:06:20.161 08:19:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.161 08:19:32 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:20.162 08:19:32 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.162 08:19:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.162 08:19:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.162 08:19:32 -- common/autotest_common.sh@10 -- # set +x 00:06:20.162 ************************************ 00:06:20.162 START TEST spdkcli_tcp 00:06:20.162 ************************************ 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.162 * Looking for test storage... 00:06:20.162 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=599313 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 599313 00:06:20.162 08:19:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 599313 ']' 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
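The spdkcli_tcp test that follows checks RPC over TCP by bridging the target's UNIX-domain socket to a TCP port with socat, then issuing rpc_get_methods against 127.0.0.1:9998 with the retry/timeout flags captured below. The same commands, condensed:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"   # tear the bridge down (assumed cleanup step)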
00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.162 08:19:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.162 [2024-07-23 08:19:32.598224] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:20.162 [2024-07-23 08:19:32.598309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599313 ] 00:06:20.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.420 [2024-07-23 08:19:32.714579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.420 [2024-07-23 08:19:32.881047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.420 [2024-07-23 08:19:32.881075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.986 08:19:33 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.986 08:19:33 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:20.986 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=599528 00:06:20.986 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:20.986 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.244 [ 00:06:21.244 "spdk_get_version", 00:06:21.244 "rpc_get_methods", 00:06:21.244 "trace_get_info", 00:06:21.244 "trace_get_tpoint_group_mask", 00:06:21.244 "trace_disable_tpoint_group", 00:06:21.244 "trace_enable_tpoint_group", 00:06:21.244 "trace_clear_tpoint_mask", 00:06:21.244 "trace_set_tpoint_mask", 00:06:21.244 "vfu_tgt_set_base_path", 00:06:21.244 "framework_get_pci_devices", 00:06:21.244 "framework_get_config", 00:06:21.244 "framework_get_subsystems", 00:06:21.244 "keyring_get_keys", 00:06:21.244 "iobuf_get_stats", 00:06:21.244 "iobuf_set_options", 00:06:21.244 "sock_get_default_impl", 00:06:21.244 "sock_set_default_impl", 00:06:21.244 "sock_impl_set_options", 00:06:21.244 "sock_impl_get_options", 00:06:21.244 "vmd_rescan", 00:06:21.244 "vmd_remove_device", 00:06:21.244 "vmd_enable", 00:06:21.244 "accel_get_stats", 00:06:21.244 "accel_set_options", 00:06:21.244 "accel_set_driver", 00:06:21.244 "accel_crypto_key_destroy", 00:06:21.244 "accel_crypto_keys_get", 00:06:21.244 "accel_crypto_key_create", 00:06:21.244 "accel_assign_opc", 00:06:21.244 "accel_get_module_info", 00:06:21.244 "accel_get_opc_assignments", 00:06:21.244 "notify_get_notifications", 00:06:21.244 "notify_get_types", 00:06:21.244 "bdev_get_histogram", 00:06:21.244 "bdev_enable_histogram", 00:06:21.244 "bdev_set_qos_limit", 00:06:21.244 "bdev_set_qd_sampling_period", 00:06:21.244 "bdev_get_bdevs", 00:06:21.244 "bdev_reset_iostat", 00:06:21.244 "bdev_get_iostat", 00:06:21.244 "bdev_examine", 00:06:21.244 "bdev_wait_for_examine", 00:06:21.244 "bdev_set_options", 00:06:21.244 "scsi_get_devices", 00:06:21.244 "thread_set_cpumask", 00:06:21.244 "framework_get_governor", 00:06:21.244 "framework_get_scheduler", 00:06:21.244 "framework_set_scheduler", 00:06:21.244 "framework_get_reactors", 00:06:21.244 "thread_get_io_channels", 00:06:21.244 "thread_get_pollers", 00:06:21.244 "thread_get_stats", 00:06:21.244 "framework_monitor_context_switch", 00:06:21.244 "spdk_kill_instance", 00:06:21.244 "log_enable_timestamps", 00:06:21.244 "log_get_flags", 
00:06:21.244 "log_clear_flag", 00:06:21.244 "log_set_flag", 00:06:21.244 "log_get_level", 00:06:21.244 "log_set_level", 00:06:21.244 "log_get_print_level", 00:06:21.244 "log_set_print_level", 00:06:21.244 "framework_enable_cpumask_locks", 00:06:21.244 "framework_disable_cpumask_locks", 00:06:21.244 "framework_wait_init", 00:06:21.244 "framework_start_init", 00:06:21.244 "virtio_blk_create_transport", 00:06:21.244 "virtio_blk_get_transports", 00:06:21.244 "vhost_controller_set_coalescing", 00:06:21.244 "vhost_get_controllers", 00:06:21.244 "vhost_delete_controller", 00:06:21.244 "vhost_create_blk_controller", 00:06:21.244 "vhost_scsi_controller_remove_target", 00:06:21.244 "vhost_scsi_controller_add_target", 00:06:21.244 "vhost_start_scsi_controller", 00:06:21.244 "vhost_create_scsi_controller", 00:06:21.244 "ublk_recover_disk", 00:06:21.244 "ublk_get_disks", 00:06:21.244 "ublk_stop_disk", 00:06:21.244 "ublk_start_disk", 00:06:21.244 "ublk_destroy_target", 00:06:21.244 "ublk_create_target", 00:06:21.244 "nbd_get_disks", 00:06:21.244 "nbd_stop_disk", 00:06:21.244 "nbd_start_disk", 00:06:21.244 "env_dpdk_get_mem_stats", 00:06:21.244 "nvmf_stop_mdns_prr", 00:06:21.244 "nvmf_publish_mdns_prr", 00:06:21.244 "nvmf_subsystem_get_listeners", 00:06:21.244 "nvmf_subsystem_get_qpairs", 00:06:21.244 "nvmf_subsystem_get_controllers", 00:06:21.244 "nvmf_get_stats", 00:06:21.244 "nvmf_get_transports", 00:06:21.244 "nvmf_create_transport", 00:06:21.244 "nvmf_get_targets", 00:06:21.244 "nvmf_delete_target", 00:06:21.244 "nvmf_create_target", 00:06:21.244 "nvmf_subsystem_allow_any_host", 00:06:21.244 "nvmf_subsystem_remove_host", 00:06:21.244 "nvmf_subsystem_add_host", 00:06:21.244 "nvmf_ns_remove_host", 00:06:21.244 "nvmf_ns_add_host", 00:06:21.244 "nvmf_subsystem_remove_ns", 00:06:21.244 "nvmf_subsystem_add_ns", 00:06:21.244 "nvmf_subsystem_listener_set_ana_state", 00:06:21.244 "nvmf_discovery_get_referrals", 00:06:21.244 "nvmf_discovery_remove_referral", 00:06:21.244 "nvmf_discovery_add_referral", 00:06:21.244 "nvmf_subsystem_remove_listener", 00:06:21.244 "nvmf_subsystem_add_listener", 00:06:21.244 "nvmf_delete_subsystem", 00:06:21.244 "nvmf_create_subsystem", 00:06:21.244 "nvmf_get_subsystems", 00:06:21.244 "nvmf_set_crdt", 00:06:21.244 "nvmf_set_config", 00:06:21.244 "nvmf_set_max_subsystems", 00:06:21.244 "iscsi_get_histogram", 00:06:21.244 "iscsi_enable_histogram", 00:06:21.244 "iscsi_set_options", 00:06:21.244 "iscsi_get_auth_groups", 00:06:21.244 "iscsi_auth_group_remove_secret", 00:06:21.244 "iscsi_auth_group_add_secret", 00:06:21.244 "iscsi_delete_auth_group", 00:06:21.244 "iscsi_create_auth_group", 00:06:21.244 "iscsi_set_discovery_auth", 00:06:21.244 "iscsi_get_options", 00:06:21.244 "iscsi_target_node_request_logout", 00:06:21.244 "iscsi_target_node_set_redirect", 00:06:21.244 "iscsi_target_node_set_auth", 00:06:21.244 "iscsi_target_node_add_lun", 00:06:21.244 "iscsi_get_stats", 00:06:21.244 "iscsi_get_connections", 00:06:21.244 "iscsi_portal_group_set_auth", 00:06:21.244 "iscsi_start_portal_group", 00:06:21.244 "iscsi_delete_portal_group", 00:06:21.244 "iscsi_create_portal_group", 00:06:21.244 "iscsi_get_portal_groups", 00:06:21.244 "iscsi_delete_target_node", 00:06:21.244 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.244 "iscsi_target_node_add_pg_ig_maps", 00:06:21.244 "iscsi_create_target_node", 00:06:21.244 "iscsi_get_target_nodes", 00:06:21.244 "iscsi_delete_initiator_group", 00:06:21.244 "iscsi_initiator_group_remove_initiators", 00:06:21.244 "iscsi_initiator_group_add_initiators", 
00:06:21.244 "iscsi_create_initiator_group", 00:06:21.244 "iscsi_get_initiator_groups", 00:06:21.244 "keyring_linux_set_options", 00:06:21.244 "keyring_file_remove_key", 00:06:21.244 "keyring_file_add_key", 00:06:21.244 "vfu_virtio_create_scsi_endpoint", 00:06:21.244 "vfu_virtio_scsi_remove_target", 00:06:21.244 "vfu_virtio_scsi_add_target", 00:06:21.244 "vfu_virtio_create_blk_endpoint", 00:06:21.244 "vfu_virtio_delete_endpoint", 00:06:21.244 "iaa_scan_accel_module", 00:06:21.244 "dsa_scan_accel_module", 00:06:21.244 "ioat_scan_accel_module", 00:06:21.244 "accel_error_inject_error", 00:06:21.244 "bdev_iscsi_delete", 00:06:21.244 "bdev_iscsi_create", 00:06:21.244 "bdev_iscsi_set_options", 00:06:21.244 "bdev_virtio_attach_controller", 00:06:21.244 "bdev_virtio_scsi_get_devices", 00:06:21.244 "bdev_virtio_detach_controller", 00:06:21.244 "bdev_virtio_blk_set_hotplug", 00:06:21.244 "bdev_ftl_set_property", 00:06:21.244 "bdev_ftl_get_properties", 00:06:21.244 "bdev_ftl_get_stats", 00:06:21.244 "bdev_ftl_unmap", 00:06:21.244 "bdev_ftl_unload", 00:06:21.244 "bdev_ftl_delete", 00:06:21.244 "bdev_ftl_load", 00:06:21.244 "bdev_ftl_create", 00:06:21.244 "bdev_aio_delete", 00:06:21.244 "bdev_aio_rescan", 00:06:21.244 "bdev_aio_create", 00:06:21.244 "blobfs_create", 00:06:21.244 "blobfs_detect", 00:06:21.244 "blobfs_set_cache_size", 00:06:21.244 "bdev_zone_block_delete", 00:06:21.244 "bdev_zone_block_create", 00:06:21.244 "bdev_delay_delete", 00:06:21.244 "bdev_delay_create", 00:06:21.244 "bdev_delay_update_latency", 00:06:21.244 "bdev_split_delete", 00:06:21.244 "bdev_split_create", 00:06:21.244 "bdev_error_inject_error", 00:06:21.244 "bdev_error_delete", 00:06:21.244 "bdev_error_create", 00:06:21.244 "bdev_raid_set_options", 00:06:21.244 "bdev_raid_remove_base_bdev", 00:06:21.244 "bdev_raid_add_base_bdev", 00:06:21.244 "bdev_raid_delete", 00:06:21.244 "bdev_raid_create", 00:06:21.244 "bdev_raid_get_bdevs", 00:06:21.244 "bdev_lvol_set_parent_bdev", 00:06:21.244 "bdev_lvol_set_parent", 00:06:21.244 "bdev_lvol_check_shallow_copy", 00:06:21.244 "bdev_lvol_start_shallow_copy", 00:06:21.244 "bdev_lvol_grow_lvstore", 00:06:21.244 "bdev_lvol_get_lvols", 00:06:21.244 "bdev_lvol_get_lvstores", 00:06:21.244 "bdev_lvol_delete", 00:06:21.244 "bdev_lvol_set_read_only", 00:06:21.244 "bdev_lvol_resize", 00:06:21.244 "bdev_lvol_decouple_parent", 00:06:21.244 "bdev_lvol_inflate", 00:06:21.244 "bdev_lvol_rename", 00:06:21.244 "bdev_lvol_clone_bdev", 00:06:21.244 "bdev_lvol_clone", 00:06:21.245 "bdev_lvol_snapshot", 00:06:21.245 "bdev_lvol_create", 00:06:21.245 "bdev_lvol_delete_lvstore", 00:06:21.245 "bdev_lvol_rename_lvstore", 00:06:21.245 "bdev_lvol_create_lvstore", 00:06:21.245 "bdev_passthru_delete", 00:06:21.245 "bdev_passthru_create", 00:06:21.245 "bdev_nvme_cuse_unregister", 00:06:21.245 "bdev_nvme_cuse_register", 00:06:21.245 "bdev_opal_new_user", 00:06:21.245 "bdev_opal_set_lock_state", 00:06:21.245 "bdev_opal_delete", 00:06:21.245 "bdev_opal_get_info", 00:06:21.245 "bdev_opal_create", 00:06:21.245 "bdev_nvme_opal_revert", 00:06:21.245 "bdev_nvme_opal_init", 00:06:21.245 "bdev_nvme_send_cmd", 00:06:21.245 "bdev_nvme_get_path_iostat", 00:06:21.245 "bdev_nvme_get_mdns_discovery_info", 00:06:21.245 "bdev_nvme_stop_mdns_discovery", 00:06:21.245 "bdev_nvme_start_mdns_discovery", 00:06:21.245 "bdev_nvme_set_multipath_policy", 00:06:21.245 "bdev_nvme_set_preferred_path", 00:06:21.245 "bdev_nvme_get_io_paths", 00:06:21.245 "bdev_nvme_remove_error_injection", 00:06:21.245 "bdev_nvme_add_error_injection", 00:06:21.245 
"bdev_nvme_get_discovery_info", 00:06:21.245 "bdev_nvme_stop_discovery", 00:06:21.245 "bdev_nvme_start_discovery", 00:06:21.245 "bdev_nvme_get_controller_health_info", 00:06:21.245 "bdev_nvme_disable_controller", 00:06:21.245 "bdev_nvme_enable_controller", 00:06:21.245 "bdev_nvme_reset_controller", 00:06:21.245 "bdev_nvme_get_transport_statistics", 00:06:21.245 "bdev_nvme_apply_firmware", 00:06:21.245 "bdev_nvme_detach_controller", 00:06:21.245 "bdev_nvme_get_controllers", 00:06:21.245 "bdev_nvme_attach_controller", 00:06:21.245 "bdev_nvme_set_hotplug", 00:06:21.245 "bdev_nvme_set_options", 00:06:21.245 "bdev_null_resize", 00:06:21.245 "bdev_null_delete", 00:06:21.245 "bdev_null_create", 00:06:21.245 "bdev_malloc_delete", 00:06:21.245 "bdev_malloc_create" 00:06:21.245 ] 00:06:21.245 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.245 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.245 08:19:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 599313 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 599313 ']' 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 599313 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599313 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599313' 00:06:21.245 killing process with pid 599313 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 599313 00:06:21.245 08:19:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 599313 00:06:23.147 00:06:23.147 real 0m2.774s 00:06:23.147 user 0m4.899s 00:06:23.147 sys 0m0.531s 00:06:23.147 08:19:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.147 08:19:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 END TEST spdkcli_tcp 00:06:23.147 ************************************ 00:06:23.147 08:19:35 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.147 08:19:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.147 08:19:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.147 08:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.147 08:19:35 -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 ************************************ 00:06:23.147 START TEST dpdk_mem_utility 00:06:23.147 ************************************ 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:23.147 * Looking for test storage... 
00:06:23.147 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:23.147 08:19:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.147 08:19:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=599811 00:06:23.147 08:19:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 599811 00:06:23.147 08:19:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 599811 ']' 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.147 08:19:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.147 [2024-07-23 08:19:35.428299] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:23.147 [2024-07-23 08:19:35.428391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599811 ] 00:06:23.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.147 [2024-07-23 08:19:35.541525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.406 [2024-07-23 08:19:35.704131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.974 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.974 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:23.974 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.974 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.974 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.974 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.974 { 00:06:23.974 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.974 } 00:06:23.974 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.974 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:23.974 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:23.974 1 heaps totaling size 820.000000 MiB 00:06:23.974 size: 820.000000 MiB heap id: 0 00:06:23.974 end heaps---------- 00:06:23.974 8 mempools totaling size 598.116089 MiB 00:06:23.974 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.974 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.974 size: 84.521057 MiB name: bdev_io_599811 00:06:23.974 size: 51.011292 MiB name: evtpool_599811 00:06:23.974 
size: 50.003479 MiB name: msgpool_599811 00:06:23.974 size: 21.763794 MiB name: PDU_Pool 00:06:23.974 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.974 size: 0.026123 MiB name: Session_Pool 00:06:23.974 end mempools------- 00:06:23.974 6 memzones totaling size 4.142822 MiB 00:06:23.975 size: 1.000366 MiB name: RG_ring_0_599811 00:06:23.975 size: 1.000366 MiB name: RG_ring_1_599811 00:06:23.975 size: 1.000366 MiB name: RG_ring_4_599811 00:06:23.975 size: 1.000366 MiB name: RG_ring_5_599811 00:06:23.975 size: 0.125366 MiB name: RG_ring_2_599811 00:06:23.975 size: 0.015991 MiB name: RG_ring_3_599811 00:06:23.975 end memzones------- 00:06:23.975 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.975 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:23.975 list of free elements. size: 18.514832 MiB 00:06:23.975 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:23.975 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:23.975 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:23.975 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:23.975 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:23.975 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:23.975 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:23.975 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:23.975 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:23.975 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:23.975 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:23.975 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:23.975 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:23.975 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:23.975 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:23.975 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:23.975 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:23.975 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:23.975 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:23.975 list of standard malloc elements. 
size: 199.220764 MiB 00:06:23.975 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:23.975 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:23.975 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:23.975 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:23.975 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:23.975 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:23.975 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:23.975 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:23.975 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:23.975 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:23.975 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:23.975 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:23.975 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:23.975 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:23.975 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:23.975 list of memzone associated elements. 
size: 602.264404 MiB 00:06:23.975 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:23.975 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.975 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:23.975 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.975 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:23.975 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_599811_0 00:06:23.975 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:23.975 associated memzone info: size: 48.002930 MiB name: MP_evtpool_599811_0 00:06:23.975 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:23.975 associated memzone info: size: 48.002930 MiB name: MP_msgpool_599811_0 00:06:23.975 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:23.975 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.975 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:23.975 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.975 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:23.975 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_599811 00:06:23.975 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:23.975 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_599811 00:06:23.975 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:23.975 associated memzone info: size: 1.007996 MiB name: MP_evtpool_599811 00:06:23.975 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:23.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.975 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:23.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.975 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:23.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.975 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:23.975 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.975 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:23.975 associated memzone info: size: 1.000366 MiB name: RG_ring_0_599811 00:06:23.975 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:23.975 associated memzone info: size: 1.000366 MiB name: RG_ring_1_599811 00:06:23.975 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:23.975 associated memzone info: size: 1.000366 MiB name: RG_ring_4_599811 00:06:23.975 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:23.975 associated memzone info: size: 1.000366 MiB name: RG_ring_5_599811 00:06:23.975 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:23.975 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_599811 00:06:23.975 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:23.975 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.975 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:23.975 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.975 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:23.975 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.975 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:23.975 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_599811 00:06:23.975 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:23.975 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.975 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:23.975 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.975 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:23.975 associated memzone info: size: 0.015991 MiB name: RG_ring_3_599811 00:06:23.975 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:23.975 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.975 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:23.975 associated memzone info: size: 0.000183 MiB name: MP_msgpool_599811 00:06:23.975 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:23.975 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_599811 00:06:23.975 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:23.975 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.975 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.975 08:19:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 599811 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 599811 ']' 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 599811 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599811 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.975 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.976 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599811' 00:06:23.976 killing process with pid 599811 00:06:23.976 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 599811 00:06:23.976 08:19:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 599811 00:06:25.944 00:06:25.944 real 0m2.638s 00:06:25.944 user 0m2.609s 00:06:25.944 sys 0m0.504s 00:06:25.944 08:19:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.944 08:19:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.944 ************************************ 00:06:25.944 END TEST dpdk_mem_utility 00:06:25.944 ************************************ 00:06:25.944 08:19:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.944 08:19:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:25.944 08:19:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.944 08:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.944 08:19:37 -- common/autotest_common.sh@10 -- # set +x 00:06:25.944 ************************************ 00:06:25.944 START TEST event 00:06:25.944 ************************************ 00:06:25.944 08:19:38 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:25.944 * Looking for test storage... 
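The memory dump above is produced in two steps: env_dpdk_get_mem_stats makes the running target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then post-processes that file. A sketch of the same flow by hand (paths relative to the spdk repo; the -m 0 form yields the detailed per-heap element lists seen above):

# ask the target to dump its DPDK memory state
scripts/rpc.py env_dpdk_get_mem_stats      # returns {"filename": "/tmp/spdk_mem_dump.txt"}
# summarize heaps, mempools and memzones from that dump
scripts/dpdk_mem_info.py
# same dump, but with the free/malloc element lists for heap 0
scripts/dpdk_mem_info.py -m 0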
00:06:25.944 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:25.944 08:19:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:25.944 08:19:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.944 08:19:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.944 08:19:38 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.944 08:19:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.944 08:19:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.944 ************************************ 00:06:25.944 START TEST event_perf 00:06:25.944 ************************************ 00:06:25.944 08:19:38 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:25.944 Running I/O for 1 seconds...[2024-07-23 08:19:38.162477] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:25.944 [2024-07-23 08:19:38.162571] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600304 ] 00:06:25.944 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.944 [2024-07-23 08:19:38.275215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.944 [2024-07-23 08:19:38.440215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.944 [2024-07-23 08:19:38.440293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.944 [2024-07-23 08:19:38.440350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.944 [2024-07-23 08:19:38.440378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.372 Running I/O for 1 seconds... 00:06:27.372 lcore 0: 181264 00:06:27.372 lcore 1: 181264 00:06:27.372 lcore 2: 181264 00:06:27.372 lcore 3: 181265 00:06:27.372 done. 00:06:27.372 00:06:27.372 real 0m1.563s 00:06:27.372 user 0m4.419s 00:06:27.372 sys 0m0.139s 00:06:27.372 08:19:39 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.372 08:19:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.372 ************************************ 00:06:27.372 END TEST event_perf 00:06:27.372 ************************************ 00:06:27.372 08:19:39 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.372 08:19:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.372 08:19:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:27.372 08:19:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.372 08:19:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.372 ************************************ 00:06:27.372 START TEST event_reactor 00:06:27.372 ************************************ 00:06:27.372 08:19:39 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:27.372 [2024-07-23 08:19:39.787845] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:27.372 [2024-07-23 08:19:39.787934] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600548 ] 00:06:27.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.631 [2024-07-23 08:19:39.902418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.631 [2024-07-23 08:19:40.072984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.008 test_start 00:06:29.008 oneshot 00:06:29.008 tick 100 00:06:29.008 tick 100 00:06:29.008 tick 250 00:06:29.008 tick 100 00:06:29.008 tick 100 00:06:29.008 tick 100 00:06:29.008 tick 250 00:06:29.008 tick 500 00:06:29.008 tick 100 00:06:29.008 tick 100 00:06:29.008 tick 250 00:06:29.008 tick 100 00:06:29.008 tick 100 00:06:29.008 test_end 00:06:29.008 00:06:29.008 real 0m1.564s 00:06:29.008 user 0m1.415s 00:06:29.008 sys 0m0.142s 00:06:29.008 08:19:41 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.008 08:19:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:29.008 ************************************ 00:06:29.008 END TEST event_reactor 00:06:29.008 ************************************ 00:06:29.008 08:19:41 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.008 08:19:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.008 08:19:41 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.008 08:19:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.008 08:19:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.008 ************************************ 00:06:29.008 START TEST event_reactor_perf 00:06:29.008 ************************************ 00:06:29.008 08:19:41 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:29.008 [2024-07-23 08:19:41.423483] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:29.008 [2024-07-23 08:19:41.423578] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600873 ] 00:06:29.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.267 [2024-07-23 08:19:41.538542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.267 [2024-07-23 08:19:41.698873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.644 test_start 00:06:30.644 test_end 00:06:30.644 Performance: 703868 events per second 00:06:30.644 00:06:30.644 real 0m1.562s 00:06:30.644 user 0m1.422s 00:06:30.644 sys 0m0.133s 00:06:30.644 08:19:42 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.644 08:19:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.644 ************************************ 00:06:30.644 END TEST event_reactor_perf 00:06:30.644 ************************************ 00:06:30.644 08:19:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:30.644 08:19:42 event -- event/event.sh@49 -- # uname -s 00:06:30.644 08:19:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:30.644 08:19:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.644 08:19:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.644 08:19:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.644 08:19:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.644 ************************************ 00:06:30.644 START TEST event_scheduler 00:06:30.644 ************************************ 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:30.644 * Looking for test storage... 00:06:30.644 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:30.644 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:30.644 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=601249 00:06:30.644 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.644 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:30.644 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 601249 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 601249 ']' 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.644 08:19:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.644 [2024-07-23 08:19:43.152233] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:30.644 [2024-07-23 08:19:43.152320] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601249 ] 00:06:30.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.903 [2024-07-23 08:19:43.272619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.162 [2024-07-23 08:19:43.438146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.162 [2024-07-23 08:19:43.438214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.162 [2024-07-23 08:19:43.438262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.162 [2024-07-23 08:19:43.438224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.728 08:19:43 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.728 08:19:43 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:31.728 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:31.728 08:19:43 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.728 08:19:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.728 [2024-07-23 08:19:43.972552] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:31.728 [2024-07-23 08:19:43.972581] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:31.728 [2024-07-23 08:19:43.972599] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:31.728 [2024-07-23 08:19:43.972609] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:31.728 [2024-07-23 08:19:43.972617] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:31.728 08:19:43 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.729 08:19:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:31.729 08:19:43 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.729 08:19:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.729 [2024-07-23 08:19:44.200217] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
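The startup sequence above is the standard way to exercise a scheduler: launch with --wait-for-rpc, select the scheduler before subsystem init, then kick off initialization (the dpdk_governor error just reflects a core mask that covers only part of an SMT sibling set, so the dynamic scheduler runs without the governor). The equivalent two RPCs, as a sketch against a target still in the pre-init state:

# pick the dynamic scheduler while the app is waiting for RPCs
scripts/rpc.py framework_set_scheduler dynamic
# run subsystem initialization; the scheduler begins placing threads
scripts/rpc.py framework_start_init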
00:06:31.729 08:19:44 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.729 08:19:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:31.729 08:19:44 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.729 08:19:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.729 08:19:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.729 ************************************ 00:06:31.729 START TEST scheduler_create_thread 00:06:31.729 ************************************ 00:06:31.729 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:31.729 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:31.729 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.729 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.987 2 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.987 3 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.987 4 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:31.987 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 5 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 6 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 7 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 8 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 9 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 10 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.988 08:19:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.924 08:19:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.924 08:19:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.924 08:19:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.924 08:19:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.924 08:19:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 08:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.860 00:06:33.860 real 0m2.141s 00:06:33.860 user 0m0.020s 00:06:33.860 sys 0m0.008s 00:06:33.860 08:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.860 08:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 ************************************ 00:06:33.860 END TEST scheduler_create_thread 00:06:33.860 ************************************ 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:34.118 08:19:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:34.118 08:19:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 601249 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 601249 ']' 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 601249 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601249 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601249' 00:06:34.118 killing process with pid 601249 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 601249 00:06:34.118 08:19:46 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 601249 00:06:34.377 [2024-07-23 08:19:46.856750] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
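The thread create/set-active/delete calls in the scheduler_create_thread test go through rpc.py's --plugin mechanism (the test ships a scheduler_plugin for this), which is how the harness spawns synthetic threads with a given cpumask and busy percentage. The shape of the calls, matching the ones above (assumes the plugin is importable, as scheduler.sh arranges):

# create a thread pinned to core 0 that reports itself 100% busy
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# retune thread id 11 to 50% active
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
# remove thread id 12
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12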
00:06:35.315 00:06:35.315 real 0m4.642s 00:06:35.315 user 0m7.843s 00:06:35.315 sys 0m0.456s 00:06:35.315 08:19:47 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.315 08:19:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 END TEST event_scheduler 00:06:35.315 ************************************ 00:06:35.315 08:19:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.315 08:19:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.315 08:19:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.315 08:19:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.315 08:19:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.315 08:19:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 START TEST app_repeat 00:06:35.315 ************************************ 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=601950 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 601950' 00:06:35.315 Process app_repeat pid: 601950 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.315 spdk_app_start Round 0 00:06:35.315 08:19:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 601950 /var/tmp/spdk-nbd.sock 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 601950 ']' 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.315 08:19:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 [2024-07-23 08:19:47.786582] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:35.315 [2024-07-23 08:19:47.786689] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601950 ] 00:06:35.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.574 [2024-07-23 08:19:47.903419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.574 [2024-07-23 08:19:48.066107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.574 [2024-07-23 08:19:48.066126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.142 08:19:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.142 08:19:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.142 08:19:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.400 Malloc0 00:06:36.400 08:19:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.659 Malloc1 00:06:36.659 08:19:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.659 08:19:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.918 /dev/nbd0 00:06:36.918 08:19:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.918 08:19:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.918 08:19:49 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.918 1+0 records in 00:06:36.918 1+0 records out 00:06:36.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222421 s, 18.4 MB/s 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.918 08:19:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.918 08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.918 08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.918 08:19:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.918 /dev/nbd1 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.177 1+0 records in 00:06:37.177 1+0 records out 00:06:37.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212083 s, 19.3 MB/s 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.177 08:19:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.177 
08:19:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.177 { 00:06:37.177 "nbd_device": "/dev/nbd0", 00:06:37.177 "bdev_name": "Malloc0" 00:06:37.177 }, 00:06:37.177 { 00:06:37.177 "nbd_device": "/dev/nbd1", 00:06:37.177 "bdev_name": "Malloc1" 00:06:37.177 } 00:06:37.177 ]' 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.177 { 00:06:37.177 "nbd_device": "/dev/nbd0", 00:06:37.177 "bdev_name": "Malloc0" 00:06:37.177 }, 00:06:37.177 { 00:06:37.177 "nbd_device": "/dev/nbd1", 00:06:37.177 "bdev_name": "Malloc1" 00:06:37.177 } 00:06:37.177 ]' 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.177 /dev/nbd1' 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.177 /dev/nbd1' 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.177 08:19:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.436 08:19:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.436 256+0 records in 00:06:37.436 256+0 records out 00:06:37.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972128 s, 108 MB/s 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.437 256+0 records in 00:06:37.437 256+0 records out 00:06:37.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017682 s, 59.3 MB/s 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.437 256+0 records in 00:06:37.437 256+0 records out 
00:06:37.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195861 s, 53.5 MB/s 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.437 08:19:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.696 08:19:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.696 08:19:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.696 08:19:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.696 08:19:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.696 08:19:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.696 08:19:50 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.696 08:19:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.955 08:19:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.955 08:19:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.213 08:19:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.150 [2024-07-23 08:19:51.554283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.408 [2024-07-23 08:19:51.707985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.408 [2024-07-23 08:19:51.707988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.408 [2024-07-23 08:19:51.843407] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.408 [2024-07-23 08:19:51.843459] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.310 08:19:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.310 08:19:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:41.310 spdk_app_start Round 1 00:06:41.310 08:19:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 601950 /var/tmp/spdk-nbd.sock 00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 601950 ']' 00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.310 08:19:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.569 08:19:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.569 08:19:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:41.569 08:19:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.569 Malloc0 00:06:41.828 08:19:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.828 Malloc1 00:06:41.828 08:19:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.828 08:19:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.086 /dev/nbd0 00:06:42.086 08:19:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.086 08:19:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.086 1+0 records in 00:06:42.086 1+0 records out 00:06:42.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164361 s, 24.9 MB/s 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:42.086 08:19:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:42.086 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.086 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.086 08:19:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:42.344 /dev/nbd1 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.345 1+0 records in 00:06:42.345 1+0 records out 00:06:42.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206074 s, 19.9 MB/s 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:42.345 08:19:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.345 08:19:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:42.603 08:19:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.603 { 00:06:42.603 "nbd_device": "/dev/nbd0", 00:06:42.603 "bdev_name": "Malloc0" 00:06:42.603 }, 00:06:42.603 { 00:06:42.603 "nbd_device": "/dev/nbd1", 00:06:42.603 "bdev_name": "Malloc1" 00:06:42.603 } 00:06:42.603 ]' 00:06:42.603 08:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.603 { 00:06:42.603 "nbd_device": "/dev/nbd0", 00:06:42.603 "bdev_name": "Malloc0" 00:06:42.603 }, 00:06:42.603 { 00:06:42.603 "nbd_device": "/dev/nbd1", 00:06:42.603 "bdev_name": "Malloc1" 00:06:42.603 } 00:06:42.603 ]' 00:06:42.603 08:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.603 08:19:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.604 /dev/nbd1' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.604 /dev/nbd1' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.604 256+0 records in 00:06:42.604 256+0 records out 00:06:42.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00346522 s, 303 MB/s 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.604 256+0 records in 00:06:42.604 256+0 records out 00:06:42.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169631 s, 61.8 MB/s 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.604 256+0 records in 00:06:42.604 256+0 records out 00:06:42.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196465 s, 53.4 MB/s 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.604 08:19:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.604 08:19:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.862 08:19:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.862 08:19:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.862 08:19:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.862 08:19:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.863 08:19:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.121 08:19:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.380 08:19:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.380 08:19:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.639 08:19:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.574 [2024-07-23 08:19:56.812607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.574 [2024-07-23 08:19:56.966172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.574 [2024-07-23 08:19:56.966189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.851 [2024-07-23 08:19:57.101597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.851 [2024-07-23 08:19:57.101643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.755 08:19:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:46.755 08:19:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:46.755 spdk_app_start Round 2 00:06:46.755 08:19:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 601950 /var/tmp/spdk-nbd.sock 00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 601950 ']' 00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
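Before Round 2 repeats it, note what the write/verify pass above reduces to (a sketch, not the verbatim nbd_common.sh source; the temp file stands in for .../spdk/test/event/nbdrandtest and error handling is assumed rather than shown):

tmp_file=$(mktemp)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each nbd device
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$nbd"                              # byte-for-byte read-back check
done
rm "$tmp_file"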
00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.755 08:19:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.755 08:19:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.755 08:19:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:46.755 08:19:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.013 Malloc0 00:06:47.013 08:19:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:47.273 Malloc1 00:06:47.273 08:19:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.273 /dev/nbd0 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.273 1+0 records in 00:06:47.273 1+0 records out 00:06:47.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234096 s, 17.5 MB/s 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:47.273 08:19:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.273 08:19:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.532 /dev/nbd1 00:06:47.532 08:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.532 08:19:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:47.532 08:19:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.533 1+0 records in 00:06:47.533 1+0 records out 00:06:47.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022639 s, 18.1 MB/s 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:47.533 08:19:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:47.533 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.533 08:19:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.533 08:19:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.533 08:19:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.533 08:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.792 { 00:06:47.792 "nbd_device": "/dev/nbd0", 00:06:47.792 "bdev_name": "Malloc0" 00:06:47.792 }, 00:06:47.792 { 00:06:47.792 "nbd_device": "/dev/nbd1", 00:06:47.792 "bdev_name": "Malloc1" 00:06:47.792 } 00:06:47.792 ]' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.792 { 00:06:47.792 "nbd_device": "/dev/nbd0", 00:06:47.792 "bdev_name": "Malloc0" 00:06:47.792 }, 00:06:47.792 { 00:06:47.792 "nbd_device": "/dev/nbd1", 00:06:47.792 "bdev_name": "Malloc1" 00:06:47.792 } 00:06:47.792 ]' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.792 /dev/nbd1' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.792 /dev/nbd1' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.792 256+0 records in 00:06:47.792 256+0 records out 00:06:47.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103422 s, 101 MB/s 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.792 256+0 records in 00:06:47.792 256+0 records out 00:06:47.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169189 s, 62.0 MB/s 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.792 256+0 records in 00:06:47.792 256+0 records out 00:06:47.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019951 s, 52.6 MB/s 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.792 08:20:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.052 08:20:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.310 08:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.311 08:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.569 08:20:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.569 08:20:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.826 08:20:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.762 [2024-07-23 08:20:02.087754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.762 [2024-07-23 08:20:02.240458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.762 [2024-07-23 08:20:02.240462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.019 [2024-07-23 08:20:02.375040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.019 [2024-07-23 08:20:02.375094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.923 08:20:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 601950 /var/tmp/spdk-nbd.sock 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 601950 ']' 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
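The nbd_get_count step above derives its count from the JSON the target returns over the RPC socket. Roughly (sketch; rpc.py abbreviates the full scripts/rpc.py path used in the trace):

json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
# grep -c prints 0 but exits 1 when nothing matches, hence the || true;
# count is 0 here because both devices were already stopped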
00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.923 08:20:04 event.app_repeat -- event/event.sh@39 -- # killprocess 601950 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 601950 ']' 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 601950 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.923 08:20:04 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 601950 00:06:52.181 08:20:04 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.181 08:20:04 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.181 08:20:04 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 601950' 00:06:52.181 killing process with pid 601950 00:06:52.181 08:20:04 event.app_repeat -- common/autotest_common.sh@967 -- # kill 601950 00:06:52.181 08:20:04 event.app_repeat -- common/autotest_common.sh@972 -- # wait 601950 00:06:52.749 spdk_app_start is called in Round 0. 00:06:52.749 Shutdown signal received, stop current app iteration 00:06:52.749 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:52.749 spdk_app_start is called in Round 1. 00:06:52.749 Shutdown signal received, stop current app iteration 00:06:52.749 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:52.749 spdk_app_start is called in Round 2. 00:06:52.749 Shutdown signal received, stop current app iteration 00:06:52.749 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:06:52.749 spdk_app_start is called in Round 3. 
00:06:52.749 Shutdown signal received, stop current app iteration 00:06:52.749 08:20:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:52.749 08:20:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:52.749 00:06:52.749 real 0m17.467s 00:06:52.749 user 0m36.246s 00:06:52.749 sys 0m2.642s 00:06:52.749 08:20:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.749 08:20:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.749 ************************************ 00:06:52.749 END TEST app_repeat 00:06:52.749 ************************************ 00:06:52.749 08:20:05 event -- common/autotest_common.sh@1142 -- # return 0 00:06:52.749 08:20:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:52.749 08:20:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:52.749 08:20:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.749 08:20:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.749 08:20:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.008 ************************************ 00:06:53.009 START TEST cpu_locks 00:06:53.009 ************************************ 00:06:53.009 08:20:05 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:53.009 * Looking for test storage... 00:06:53.009 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:53.009 08:20:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:53.009 08:20:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:53.009 08:20:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:53.009 08:20:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:53.009 08:20:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.009 08:20:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.009 08:20:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.009 ************************************ 00:06:53.009 START TEST default_locks 00:06:53.009 ************************************ 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=604956 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 604956 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 604956 ']' 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
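The killprocess helper that ended app_repeat above follows a guarded kill-and-reap pattern. A condensed sketch (the liveness and sudo guards mirror the trace; the return codes are assumptions):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1                   # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                        # wait only reaps our own children
}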
00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.009 08:20:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.009 [2024-07-23 08:20:05.445033] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:53.009 [2024-07-23 08:20:05.445121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604956 ] 00:06:53.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.268 [2024-07-23 08:20:05.561153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.268 [2024-07-23 08:20:05.724570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.835 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.835 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:53.835 08:20:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 604956 00:06:53.835 08:20:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 604956 00:06:53.835 08:20:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.402 lslocks: write error 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 604956 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 604956 ']' 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 604956 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 604956 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 604956' 00:06:54.402 killing process with pid 604956 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 604956 00:06:54.402 08:20:06 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 604956 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 604956 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 604956 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 604956 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 604956 ']' 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.306 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (604956) - No such process 00:06:56.306 ERROR: process (pid: 604956) is no longer running 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.306 00:06:56.306 real 0m2.952s 00:06:56.306 user 0m2.914s 00:06:56.306 sys 0m0.662s 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.306 08:20:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.306 ************************************ 00:06:56.306 END TEST default_locks 00:06:56.306 ************************************ 00:06:56.306 08:20:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.306 08:20:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.306 08:20:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.306 08:20:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.306 08:20:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.306 ************************************ 00:06:56.307 START TEST default_locks_via_rpc 00:06:56.307 ************************************ 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=605437 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 605437 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 605437 ']' 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.307 08:20:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.307 [2024-07-23 08:20:08.473526] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:56.307 [2024-07-23 08:20:08.473624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605437 ] 00:06:56.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.307 [2024-07-23 08:20:08.594848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.307 [2024-07-23 08:20:08.759735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 605437 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 605437 00:06:56.874 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
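Two notes on the lock checks above. First, locks_exist is just lslocks piped through grep (sketch):

locks_exist() {
    # spdk_tgt holds one flock'd spdk_cpu_lock file per claimed core
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

Second, the stray "lslocks: write error" earlier in the output is expected noise: grep -q exits at its first match and closes the pipe while lslocks is still writing.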
00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 605437 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 605437 ']' 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 605437 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 605437 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 605437' 00:06:57.441 killing process with pid 605437 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 605437 00:06:57.441 08:20:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 605437 00:06:58.819 00:06:58.819 real 0m2.852s 00:06:58.819 user 0m2.819s 00:06:58.819 sys 0m0.639s 00:06:58.819 08:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.819 08:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.819 ************************************ 00:06:58.819 END TEST default_locks_via_rpc 00:06:58.819 ************************************ 00:06:58.819 08:20:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.819 08:20:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:58.819 08:20:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.819 08:20:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.819 08:20:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.088 ************************************ 00:06:59.088 START TEST non_locking_app_on_locked_coremask 00:06:59.088 ************************************ 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=606011 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 606011 /var/tmp/spdk.sock 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 606011 ']' 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.088 08:20:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.088 [2024-07-23 08:20:11.382507] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:59.088 [2024-07-23 08:20:11.382582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606011 ] 00:06:59.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.088 [2024-07-23 08:20:11.486946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.346 [2024-07-23 08:20:11.652528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=606088 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 606088 /var/tmp/spdk2.sock 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 606088 ']' 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.913 08:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.913 [2024-07-23 08:20:12.251012] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:59.913 [2024-07-23 08:20:12.251116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid606088 ] 00:06:59.913 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.913 [2024-07-23 08:20:12.387967] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
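non_locking_app_on_locked_coremask hinges on the two-instance pattern just launched above: both targets claim core mask 0x1, but only the second passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice), so it starts even though the first already holds the core-0 lock. Schematically (sketch; pid bookkeeping and the waits on each RPC socket are omitted):

spdk_tgt -m 0x1 &                                                 # first instance locks core 0
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second shares core 0, unlocked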
00:06:59.913 [2024-07-23 08:20:12.388010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.480 [2024-07-23 08:20:12.723080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.415 08:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.415 08:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.415 08:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 606011 00:07:01.415 08:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 606011 00:07:01.415 08:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.981 lslocks: write error 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 606011 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 606011 ']' 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 606011 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606011 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606011' 00:07:01.981 killing process with pid 606011 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 606011 00:07:01.981 08:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 606011 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 606088 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 606088 ']' 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 606088 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606088 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606088' 00:07:05.267 killing 
process with pid 606088 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 606088 00:07:05.267 08:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 606088 00:07:06.644 00:07:06.644 real 0m7.614s 00:07:06.644 user 0m7.654s 00:07:06.644 sys 0m1.137s 00:07:06.644 08:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.644 08:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 ************************************ 00:07:06.644 END TEST non_locking_app_on_locked_coremask 00:07:06.644 ************************************ 00:07:06.644 08:20:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.644 08:20:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:06.644 08:20:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.644 08:20:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.644 08:20:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 ************************************ 00:07:06.644 START TEST locking_app_on_unlocked_coremask 00:07:06.644 ************************************ 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=607195 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 607195 /var/tmp/spdk.sock 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 607195 ']' 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.644 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.644 [2024-07-23 08:20:19.061827] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
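Both teardowns above run the same killprocess helper; its shape, reconstructed from the xtrace tags (@948-@972 of autotest_common.sh) with the sudo branch simplified:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                           # @948: refuse an empty pid
    kill -0 "$pid" || return 1                          # @952: still alive?
    local process_name
    if [ "$(uname)" = Linux ]; then                     # @953
        process_name=$(ps --no-headers -o comm= "$pid")   # @954: e.g. reactor_0
    fi
    if [ "$process_name" = sudo ]; then                 # @958: signal through sudo instead
        sudo kill "$pid"
    else
        echo "killing process with pid $pid"            # @966
        kill "$pid"                                     # @967
    fi
    wait "$pid" || true                                 # @972: reap, tolerate non-zero exit
}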
00:07:06.644 [2024-07-23 08:20:19.061907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607195 ] 00:07:06.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.903 [2024-07-23 08:20:19.179061] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:06.903 [2024-07-23 08:20:19.179127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.903 [2024-07-23 08:20:19.343785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=607406 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 607406 /var/tmp/spdk2.sock 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 607406 ']' 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.470 08:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.470 [2024-07-23 08:20:19.948112] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
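The locks_exist probe that follows (it checks pid 607406 next) is just lslocks piped into grep, per the cpu_locks.sh@22 trace; the recurring "lslocks: write error" is harmless, since grep -q exits on the first match and lslocks gets EPIPE on the closed pipe:

locks_exist() {
    local pid=$1
    # does this pid hold a lock on one of the /var/tmp/spdk_cpu_lock_* files?
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}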
00:07:07.470 [2024-07-23 08:20:19.948216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607406 ] 00:07:07.728 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.728 [2024-07-23 08:20:20.091420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.987 [2024-07-23 08:20:20.426832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.365 08:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.365 08:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.365 08:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 607406 00:07:09.365 08:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.365 08:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 607406 00:07:09.624 lslocks: write error 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 607195 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 607195 ']' 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 607195 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 607195 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 607195' 00:07:09.624 killing process with pid 607195 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 607195 00:07:09.624 08:20:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 607195 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 607406 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 607406 ']' 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 607406 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 607406 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 607406' 00:07:12.910 killing process with pid 607406 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 607406 00:07:12.910 08:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 607406 00:07:14.288 00:07:14.288 real 0m7.584s 00:07:14.288 user 0m7.652s 00:07:14.288 sys 0m1.126s 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.288 ************************************ 00:07:14.288 END TEST locking_app_on_unlocked_coremask 00:07:14.288 ************************************ 00:07:14.288 08:20:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.288 08:20:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.288 08:20:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.288 08:20:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.288 08:20:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.288 ************************************ 00:07:14.288 START TEST locking_app_on_locked_coremask 00:07:14.288 ************************************ 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=608506 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 608506 /var/tmp/spdk.sock 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 608506 ']' 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.288 08:20:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.288 [2024-07-23 08:20:26.709537] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
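Every target start in this suite funnels through waitforlisten, which blocks until the new process answers on its UNIX domain socket. A plausible shape, inferred from the traced locals (rpc_addr, max_retries) rather than copied from autotest_common.sh:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}      # @833: default RPC socket
    local max_retries=100 i=0                    # @834
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # socket is up; good enough for a sketch
        sleep 0.1
        (( ++i ))
    done
    return 1                                     # never came up
}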
00:07:14.288 [2024-07-23 08:20:26.709612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608506 ] 00:07:14.288 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.554 [2024-07-23 08:20:26.826488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.554 [2024-07-23 08:20:26.990053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=608583 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 608583 /var/tmp/spdk2.sock 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 608583 /var/tmp/spdk2.sock 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 608583 /var/tmp/spdk2.sock 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 608583 ']' 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.121 08:20:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.121 [2024-07-23 08:20:27.592281] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
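The 'NOT waitforlisten 608583' step expects the second target to die: 608506 already holds the lock on core 0 and this time --disable-cpumask-locks is absent. The NOT wrapper (@648-@675) simply inverts the wrapped command's status; the real helper also special-cases signal deaths (es > 128), but the core of it is:

NOT() {
    local es=0
    "$@" || es=$?      # run the command, capture its exit status
    (( es != 0 ))      # succeed only if the command failed
}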
00:07:15.121 [2024-07-23 08:20:27.592370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608583 ] 00:07:15.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.380 [2024-07-23 08:20:27.733389] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 608506 has claimed it. 00:07:15.380 [2024-07-23 08:20:27.733438] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.947 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (608583) - No such process 00:07:15.947 ERROR: process (pid: 608583) is no longer running 00:07:15.947 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.947 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.947 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 608506 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 608506 00:07:15.948 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.207 lslocks: write error 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 608506 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 608506 ']' 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 608506 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 608506 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 608506' 00:07:16.207 killing process with pid 608506 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 608506 00:07:16.207 08:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 608506 00:07:17.586 00:07:17.586 real 0m3.336s 00:07:17.586 user 0m3.447s 00:07:17.586 sys 0m0.695s 00:07:17.586 08:20:30 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.586 08:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.586 ************************************ 00:07:17.586 END TEST locking_app_on_locked_coremask 00:07:17.586 ************************************ 00:07:17.586 08:20:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.586 08:20:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.586 08:20:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.586 08:20:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.586 08:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.586 ************************************ 00:07:17.586 START TEST locking_overlapped_coremask 00:07:17.586 ************************************ 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=608983 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 608983 /var/tmp/spdk.sock 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 608983 ']' 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.586 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.845 [2024-07-23 08:20:30.112766] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
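This overlapped variant starts the first target with -m 0x7 and will try a second with -m 0x1c. Written out in bits, the two masks intersect only at core 2, which is exactly the core the upcoming claim error names:

# 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4
for mask in 0x7 0x1c; do
    printf '%-4s -> cores:' "$mask"
    for core in {0..4}; do
        (( (mask >> core) & 1 )) && printf ' %d' "$core"
    done
    echo
done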
00:07:17.845 [2024-07-23 08:20:30.112843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608983 ] 00:07:17.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.845 [2024-07-23 08:20:30.230405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.105 [2024-07-23 08:20:30.397878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.105 [2024-07-23 08:20:30.397927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.105 [2024-07-23 08:20:30.397955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=609197 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 609197 /var/tmp/spdk2.sock 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 609197 /var/tmp/spdk2.sock 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 609197 /var/tmp/spdk2.sock 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 609197 ']' 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.673 08:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.673 [2024-07-23 08:20:31.020751] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:18.673 [2024-07-23 08:20:31.020827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609197 ] 00:07:18.673 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.673 [2024-07-23 08:20:31.161770] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 608983 has claimed it. 00:07:18.673 [2024-07-23 08:20:31.161815] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.242 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (609197) - No such process 00:07:19.242 ERROR: process (pid: 609197) is no longer running 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 608983 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 608983 ']' 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 608983 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 608983 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 608983' 00:07:19.242 killing process with pid 608983 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 608983 00:07:19.242 08:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 608983 00:07:21.149 00:07:21.149 real 0m3.167s 00:07:21.149 user 0m8.359s 00:07:21.149 sys 0m0.558s 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 ************************************ 00:07:21.149 END TEST locking_overlapped_coremask 00:07:21.149 ************************************ 00:07:21.149 08:20:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.149 08:20:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:21.149 08:20:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.149 08:20:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.149 08:20:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 ************************************ 00:07:21.149 START TEST locking_overlapped_coremask_via_rpc 00:07:21.149 ************************************ 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=609648 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 609648 /var/tmp/spdk.sock 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 609648 ']' 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.149 08:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.149 [2024-07-23 08:20:33.348741] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:21.149 [2024-07-23 08:20:33.348821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609648 ] 00:07:21.149 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.149 [2024-07-23 08:20:33.466474] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
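Before the suite moved on it ran check_remaining_locks (traced just before the killprocess above); per the @36-@38 trace it globs the real lock files and requires them to match the expected brace expansion exactly:

check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                      # @36: what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # @37: cores 0-2 for -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]         # @38: exact match required
}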
00:07:21.149 [2024-07-23 08:20:33.466508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.149 [2024-07-23 08:20:33.626606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.149 [2024-07-23 08:20:33.626658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.149 [2024-07-23 08:20:33.626681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=609671 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 609671 /var/tmp/spdk2.sock 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 609671 ']' 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.717 08:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.976 [2024-07-23 08:20:34.245252] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:21.976 [2024-07-23 08:20:34.245331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609671 ] 00:07:21.976 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.976 [2024-07-23 08:20:34.389703] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
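In this via_rpc variant both targets boot with --disable-cpumask-locks (hence the two 'CPU core locks deactivated' notices above) and the locks are only taken afterwards over JSON-RPC. The equivalent manual calls, assuming the stock rpc.py layout in this spdk checkout:

# first target (cores 0-2) claims its locks; this succeeds
./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
# second target (cores 2-4) then tries the same and must fail on core 2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks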
00:07:21.976 [2024-07-23 08:20:34.389749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.234 [2024-07-23 08:20:34.728000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.234 [2024-07-23 08:20:34.731144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.234 [2024-07-23 08:20:34.731175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.608 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 [2024-07-23 08:20:35.890217] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 609648 has claimed it. 
00:07:23.609 request: 00:07:23.609 { 00:07:23.609 "method": "framework_enable_cpumask_locks", 00:07:23.609 "req_id": 1 00:07:23.609 } 00:07:23.609 Got JSON-RPC error response 00:07:23.609 response: 00:07:23.609 { 00:07:23.609 "code": -32603, 00:07:23.609 "message": "Failed to claim CPU core: 2" 00:07:23.609 } 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 609648 /var/tmp/spdk.sock 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 609648 ']' 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.609 08:20:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 609671 /var/tmp/spdk2.sock 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 609671 ']' 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
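The -32603 in that response is JSON-RPC 2.0's generic 'Internal error' code; the informative part is the message string naming the contested core. When scripting against this, the field can be pulled out with jq, the same tool the accel section uses further down:

resp='{ "code": -32603, "message": "Failed to claim CPU core: 2" }'
echo "$resp" | jq -r .message    # -> Failed to claim CPU core: 2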
00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.609 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.867 00:07:23.867 real 0m2.956s 00:07:23.867 user 0m0.876s 00:07:23.867 sys 0m0.161s 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.867 08:20:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.867 ************************************ 00:07:23.867 END TEST locking_overlapped_coremask_via_rpc 00:07:23.867 ************************************ 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:23.867 08:20:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:23.867 08:20:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 609648 ]] 00:07:23.867 08:20:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 609648 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 609648 ']' 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 609648 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 609648 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 609648' 00:07:23.867 killing process with pid 609648 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 609648 00:07:23.867 08:20:36 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 609648 00:07:25.767 08:20:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 609671 ]] 00:07:25.767 08:20:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 609671 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 609671 ']' 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 609671 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 609671 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 609671' 00:07:25.767 killing process with pid 609671 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 609671 00:07:25.767 08:20:37 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 609671 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 609648 ]] 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 609648 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 609648 ']' 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 609648 00:07:27.144 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (609648) - No such process 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 609648 is not found' 00:07:27.144 Process with pid 609648 is not found 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 609671 ]] 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 609671 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 609671 ']' 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 609671 00:07:27.144 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (609671) - No such process 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 609671 is not found' 00:07:27.144 Process with pid 609671 is not found 00:07:27.144 08:20:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.144 00:07:27.144 real 0m34.250s 00:07:27.144 user 0m56.896s 00:07:27.144 sys 0m6.072s 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.144 08:20:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.144 ************************************ 00:07:27.144 END TEST cpu_locks 00:07:27.144 ************************************ 00:07:27.144 08:20:39 event -- common/autotest_common.sh@1142 -- # return 0 00:07:27.144 00:07:27.144 real 1m1.559s 00:07:27.144 user 1m48.438s 00:07:27.144 sys 0m9.936s 00:07:27.144 08:20:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.144 08:20:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.144 ************************************ 00:07:27.144 END TEST event 00:07:27.144 ************************************ 00:07:27.144 08:20:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.144 08:20:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:27.144 08:20:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.144 08:20:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.144 08:20:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.144 ************************************ 00:07:27.144 START TEST thread 00:07:27.144 ************************************ 00:07:27.144 08:20:39 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:27.403 * Looking for test storage... 00:07:27.403 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:27.403 08:20:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.403 08:20:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:27.403 08:20:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.403 08:20:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.403 ************************************ 00:07:27.403 START TEST thread_poller_perf 00:07:27.403 ************************************ 00:07:27.403 08:20:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.403 [2024-07-23 08:20:39.783745] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:27.403 [2024-07-23 08:20:39.783836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610649 ] 00:07:27.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.403 [2024-07-23 08:20:39.899423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.661 [2024-07-23 08:20:40.071674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.661 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:29.037 ====================================== 00:07:29.037 busy:2109861590 (cyc) 00:07:29.037 total_run_count: 781000 00:07:29.037 tsc_hz: 2100000000 (cyc) 00:07:29.037 ====================================== 00:07:29.037 poller_cost: 2701 (cyc), 1286 (nsec) 00:07:29.037 00:07:29.037 real 0m1.588s 00:07:29.037 user 0m1.448s 00:07:29.037 sys 0m0.134s 00:07:29.037 08:20:41 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.037 08:20:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.037 ************************************ 00:07:29.037 END TEST thread_poller_perf 00:07:29.037 ************************************ 00:07:29.037 08:20:41 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:29.037 08:20:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.037 08:20:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:29.037 08:20:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.037 08:20:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.037 ************************************ 00:07:29.037 START TEST thread_poller_perf 00:07:29.037 ************************************ 00:07:29.037 08:20:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:29.037 [2024-07-23 08:20:41.446337] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:29.037 [2024-07-23 08:20:41.446425] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611054 ] 00:07:29.037 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.295 [2024-07-23 08:20:41.563782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.295 [2024-07-23 08:20:41.722553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.295 Running 1000 pollers for 1 seconds with 0 microseconds period. 
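The first summary above reduces to one division: poller_cost is busy cycles over total_run_count, and the nanosecond figure follows from the 2.1 GHz TSC. Checking those numbers by hand:

echo $(( 2109861590 / 781000 ))                   # 2701 cyc per poller execution
awk 'BEGIN { printf "%d nsec\n", 2701 / 2.1 }'    # ~1286 nsec at tsc_hz = 2100000000

The 0-period run announced above should come out much cheaper per execution, presumably because active pollers skip the timed-poller bookkeeping that a 1 us period implies.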
00:07:30.671 ====================================== 00:07:30.672 busy:2101936430 (cyc) 00:07:30.672 total_run_count: 12128000 00:07:30.672 tsc_hz: 2100000000 (cyc) 00:07:30.672 ====================================== 00:07:30.672 poller_cost: 173 (cyc), 82 (nsec) 00:07:30.672 00:07:30.672 real 0m1.563s 00:07:30.672 user 0m1.427s 00:07:30.672 sys 0m0.129s 00:07:30.672 08:20:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.672 08:20:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.672 ************************************ 00:07:30.672 END TEST thread_poller_perf 00:07:30.672 ************************************ 00:07:30.672 08:20:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:30.672 08:20:43 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:30.672 08:20:43 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:30.672 08:20:43 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.672 08:20:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.672 08:20:43 thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.672 ************************************ 00:07:30.672 START TEST thread_spdk_lock 00:07:30.672 ************************************ 00:07:30.672 08:20:43 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:30.672 [2024-07-23 08:20:43.075457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:30.672 [2024-07-23 08:20:43.075552] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611300 ] 00:07:30.672 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.672 [2024-07-23 08:20:43.189868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.929 [2024-07-23 08:20:43.350939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.929 [2024-07-23 08:20:43.350967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.495 [2024-07-23 08:20:43.842798] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:31.495 [2024-07-23 08:20:43.842840] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:31.495 [2024-07-23 08:20:43.842870] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x1be8f20 00:07:31.495 [2024-07-23 08:20:43.848723] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:31.495 [2024-07-23 08:20:43.848830] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:31.495 [2024-07-23 08:20:43.848851] 
00:07:31.754 Starting test contend
00:07:31.754   Worker    Delay  Wait us  Hold us  Total us
00:07:31.754        0        3   151599   187156    338755
00:07:31.754        1        5    74546   286913    361460
00:07:31.754 PASS test contend
00:07:31.754 Starting test hold_by_poller
00:07:31.754 PASS test hold_by_poller
00:07:31.754 Starting test hold_by_message
00:07:31.754 PASS test hold_by_message
00:07:31.754 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary:
00:07:31.754 100014 assertions passed
00:07:31.754 0 assertions failed
00:07:31.754 00:07:31.754 real 0m1.059s 00:07:31.754 user 0m1.409s 00:07:31.754 sys 0m0.143s 00:07:31.754 08:20:44 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.754 08:20:44 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:31.754 ************************************ 00:07:31.754 END TEST thread_spdk_lock 00:07:31.754 ************************************ 00:07:31.754 08:20:44 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:31.754 00:07:31.754 real 0m4.502s 00:07:31.754 user 0m4.388s 00:07:31.754 sys 0m0.615s 00:07:31.754 08:20:44 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.754 08:20:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.754 ************************************ 00:07:31.754 END TEST thread 00:07:31.754 ************************************ 00:07:31.754 08:20:44 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.754 08:20:44 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:31.754 08:20:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.754 08:20:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.754 08:20:44 -- common/autotest_common.sh@10 -- # set +x 00:07:31.754 ************************************ 00:07:31.754 START TEST accel 00:07:31.754 ************************************ 00:07:31.754 08:20:44 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:32.013 * Looking for test storage... 00:07:32.013 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:32.013 08:20:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:32.013 08:20:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:32.013 08:20:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:32.013 08:20:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=611571 00:07:32.013 08:20:44 accel -- accel/accel.sh@63 -- # waitforlisten 611571 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@829 -- # '[' -z 611571 ']' 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.013 08:20:44 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.013 08:20:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
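get_expected_opcs launches spdk_tgt with its config fed through process substitution (the -c /dev/fd/63 above) and then blocks in waitforlisten until pid 611571 is serving RPCs on /var/tmp/spdk.sock. A simplified sketch of what that wait amounts to; the real autotest_common.sh helper also probes the RPC server itself:

    # Poll until the target's RPC socket appears; bail out if the
    # process dies first or the retry budget runs out.
    pid=611571
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt died" >&2; exit 1; }
        [[ -S $rpc_addr ]] && break   # UNIX domain socket is up
        sleep 0.1
    done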
00:07:32.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.013 08:20:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.013 08:20:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.013 08:20:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.013 08:20:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.013 08:20:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.013 08:20:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.013 08:20:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:32.013 08:20:44 accel -- accel/accel.sh@41 -- # jq -r . 00:07:32.013 [2024-07-23 08:20:44.331124] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:32.013 [2024-07-23 08:20:44.331221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611571 ] 00:07:32.013 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.013 [2024-07-23 08:20:44.444122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.271 [2024-07-23 08:20:44.605344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@862 -- # return 0 00:07:32.839 08:20:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:32.839 08:20:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:32.839 08:20:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:32.839 08:20:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:32.839 08:20:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:32.839 08:20:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.839 08:20:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # IFS== 00:07:32.839 08:20:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:32.839 08:20:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:32.839 08:20:45 accel -- accel/accel.sh@75 -- # killprocess 611571 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@948 -- # '[' -z 611571 ']' 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@952 -- # kill -0 611571 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@953 -- # uname 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 611571 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 611571' 00:07:32.839 killing process with pid 611571 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@967 -- # kill 611571 00:07:32.839 08:20:45 accel -- common/autotest_common.sh@972 -- # wait 611571 00:07:34.228 08:20:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:34.228 08:20:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:34.228 08:20:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.228 08:20:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.228 08:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 08:20:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:34.487 08:20:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:34.487 08:20:46 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.487 08:20:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 08:20:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.487 08:20:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:34.487 08:20:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.487 08:20:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.487 08:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 ************************************ 00:07:34.487 START TEST accel_missing_filename 00:07:34.487 ************************************ 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.487 08:20:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:34.487 08:20:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:34.487 [2024-07-23 08:20:46.930232] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:34.487 [2024-07-23 08:20:46.930314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612032 ] 00:07:34.487 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.747 [2024-07-23 08:20:47.028846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.747 [2024-07-23 08:20:47.191537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.013 [2024-07-23 08:20:47.334467] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.290 [2024-07-23 08:20:47.666797] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:35.571 A filename is required. 
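accel_missing_filename is a negative test: run_test wraps accel_perf in NOT, so the test passes precisely because a compress workload with no -l input file aborts with the error above. The inversion helper is roughly the following sketch; the real autotest_common.sh version also captures the failing status for the es= handling seen next:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0       # expected failure
    }
    NOT accel_perf -t 1 -w compress   # passes: no -l <input file> given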
00:07:35.571 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:35.571 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.572 00:07:35.572 real 0m1.029s 00:07:35.572 user 0m0.877s 00:07:35.572 sys 0m0.188s 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.572 08:20:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:35.572 ************************************ 00:07:35.572 END TEST accel_missing_filename 00:07:35.572 ************************************ 00:07:35.572 08:20:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.572 08:20:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:35.572 08:20:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:35.572 08:20:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.572 08:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.572 ************************************ 00:07:35.572 START TEST accel_compress_verify 00:07:35.572 ************************************ 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.572 08:20:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.572 
08:20:47 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:35.572 08:20:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:35.572 [2024-07-23 08:20:48.026018] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:35.572 [2024-07-23 08:20:48.026120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612136 ] 00:07:35.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.861 [2024-07-23 08:20:48.141297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.861 [2024-07-23 08:20:48.303641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.141 [2024-07-23 08:20:48.444869] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.401 [2024-07-23 08:20:48.777920] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:36.661 00:07:36.661 Compression does not support the verify option, aborting. 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.661 00:07:36.661 real 0m1.039s 00:07:36.661 user 0m0.875s 00:07:36.661 sys 0m0.201s 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.661 08:20:49 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:36.661 ************************************ 00:07:36.661 END TEST accel_compress_verify 00:07:36.661 ************************************ 00:07:36.661 08:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.661 08:20:49 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:36.661 08:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:36.661 08:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.661 08:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.661 ************************************ 00:07:36.661 START TEST accel_wrong_workload 00:07:36.661 ************************************ 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:36.661 08:20:49 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:36.661 08:20:49 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:36.661 Unsupported workload type: foobar 00:07:36.661 [2024-07-23 08:20:49.131611] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:36.661 accel_perf options: 00:07:36.661 [-h help message] 00:07:36.661 [-q queue depth per core] 00:07:36.661 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:36.661 [-T number of threads per core 00:07:36.661 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:36.661 [-t time in seconds] 00:07:36.661 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:36.661 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:36.661 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:36.661 [-l for compress/decompress workloads, name of uncompressed input file 00:07:36.661 [-S for crc32c workload, use this seed value (default 0) 00:07:36.661 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:36.661 [-f for fill workload, use this BYTE value (default 255) 00:07:36.661 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:36.661 [-y verify result if this switch is on] 00:07:36.661 [-a tasks to allocate per core (default: same value as -q)] 00:07:36.661 Can be used to spread operations across a wider range of memory. 
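The usage text enumerates every accepted workload and flag, so the only thing wrong with the invocation above is the made-up -w foobar. For contrast, well-formed invocations built from those same options look like this (the input path is illustrative only):

    accel_perf -t 1 -w crc32c -S 32 -y      # crc32c with seed 32, verify results
    accel_perf -t 1 -w xor -y -x 2          # xor across two source buffers
    accel_perf -t 1 -w compress -l in.bin   # compress the input file given via -l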
00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.661 00:07:36.661 real 0m0.060s 00:07:36.661 user 0m0.069s 00:07:36.661 sys 0m0.031s 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.661 08:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:36.661 ************************************ 00:07:36.661 END TEST accel_wrong_workload 00:07:36.661 ************************************ 00:07:36.920 08:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.920 08:20:49 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:36.920 08:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:36.920 08:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.920 08:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.920 ************************************ 00:07:36.920 START TEST accel_negative_buffers 00:07:36.920 ************************************ 00:07:36.920 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:36.920 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:36.921 08:20:49 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:36.921 -x option must be non-negative. 
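The es= traces running through these negative tests show how the harness normalizes a failing tool's exit status before asserting on it: the compress run exited with 234, which was folded to 106 and then collapsed to 1, and the compress-with-verify run went 161 -> 33 -> 1. The arithmetic being traced is essentially:

    # Fold signal-range exit statuses (> 128) down, then collapse any
    # remaining failure to a plain 1, matching the es= traces above.
    es=234                                  # 161 in accel_compress_verify
    (( es > 128 )) && es=$(( es - 128 ))    # 234 -> 106, 161 -> 33
    case "$es" in
        0) ;;        # success passes through
        *) es=1 ;;   # every failure becomes 1
    esac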
00:07:36.921 [2024-07-23 08:20:49.261761] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:36.921 accel_perf options: 00:07:36.921 [-h help message] 00:07:36.921 [-q queue depth per core] 00:07:36.921 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:36.921 [-T number of threads per core 00:07:36.921 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:36.921 [-t time in seconds] 00:07:36.921 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:36.921 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:36.921 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:36.921 [-l for compress/decompress workloads, name of uncompressed input file 00:07:36.921 [-S for crc32c workload, use this seed value (default 0) 00:07:36.921 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:36.921 [-f for fill workload, use this BYTE value (default 255) 00:07:36.921 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:36.921 [-y verify result if this switch is on] 00:07:36.921 [-a tasks to allocate per core (default: same value as -q)] 00:07:36.921 Can be used to spread operations across a wider range of memory. 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.921 00:07:36.921 real 0m0.060s 00:07:36.921 user 0m0.072s 00:07:36.921 sys 0m0.032s 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.921 08:20:49 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:36.921 ************************************ 00:07:36.921 END TEST accel_negative_buffers 00:07:36.921 ************************************ 00:07:36.921 08:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.921 08:20:49 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:36.921 08:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:36.921 08:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.921 08:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.921 ************************************ 00:07:36.921 START TEST accel_crc32c 00:07:36.921 ************************************ 00:07:36.921 08:20:49 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:36.921 08:20:49 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:36.921 [2024-07-23 08:20:49.390207] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:36.921 [2024-07-23 08:20:49.390307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612420 ] 00:07:37.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.181 [2024-07-23 08:20:49.506839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.181 [2024-07-23 08:20:49.666800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 08:20:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.346 08:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.346 00:07:39.346 real 0m2.043s 00:07:39.346 user 0m1.858s 00:07:39.346 sys 0m0.199s 00:07:39.346 08:20:51 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.346 08:20:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:39.346 ************************************ 00:07:39.346 END TEST accel_crc32c 00:07:39.346 ************************************ 00:07:39.346 08:20:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.346 08:20:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:39.346 08:20:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:39.346 08:20:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.346 08:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.346 ************************************ 00:07:39.346 START TEST accel_crc32c_C2 00:07:39.346 ************************************ 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:39.346 08:20:51 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.346 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:39.346 [2024-07-23 08:20:51.500572] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:39.346 [2024-07-23 08:20:51.500664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612794 ] 00:07:39.346 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.346 [2024-07-23 08:20:51.614252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.346 [2024-07-23 08:20:51.772386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:39.605 08:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:40.982 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.241 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:41.241 08:20:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.241 00:07:41.241 real 0m2.035s 00:07:41.241 user 0m1.859s 00:07:41.241 sys 0m0.191s 00:07:41.241 08:20:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.241 08:20:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:41.241 ************************************ 00:07:41.241 END TEST accel_crc32c_C2 00:07:41.241 ************************************ 00:07:41.241 08:20:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.241 08:20:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:41.241 08:20:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.241 08:20:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.241 08:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.241 ************************************ 00:07:41.241 START TEST accel_copy 00:07:41.241 ************************************ 00:07:41.241 08:20:53 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:41.241 08:20:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:41.241 [2024-07-23 08:20:53.597733] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:41.241 [2024-07-23 08:20:53.597814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613180 ] 00:07:41.241 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.241 [2024-07-23 08:20:53.713910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.500 [2024-07-23 08:20:53.873567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.500 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.759 08:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 
08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:43.136 08:20:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.136 00:07:43.136 real 0m2.030s 00:07:43.136 user 0m1.857s 00:07:43.136 sys 0m0.187s 00:07:43.136 08:20:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.136 08:20:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 END TEST accel_copy 00:07:43.136 ************************************ 00:07:43.136 08:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.136 08:20:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.136 08:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:43.136 08:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.136 08:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.394 ************************************ 00:07:43.394 START TEST accel_fill 00:07:43.394 ************************************ 00:07:43.394 08:20:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.394 08:20:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:43.394 08:20:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:43.394 08:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.394 08:20:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.394 08:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:43.395 08:20:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:43.395 [2024-07-23 08:20:55.695507] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:43.395 [2024-07-23 08:20:55.695586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613472 ] 00:07:43.395 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.395 [2024-07-23 08:20:55.813567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.652 [2024-07-23 08:20:55.973211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.652 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
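The repeating IFS=: / read -r var val / case "$var" in triplets that dominate this trace are bash xtrace of accel.sh's output-parsing loop: each cycle reads one line of accel_perf's self-reported configuration, splits it on the first colon, and latches the fields the harness asserts on later (accel_opc= and accel_module= above). An illustrative reconstruction; the key names are placeholders for the sketch only, since the actual keys are not visible in this trace:

  # illustrative: split "key: value" lines on the first colon and record
  # the opcode and engine module, mirroring accel_opc=/accel_module= above
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=${val// /} ;;     # strip spaces from value
          *module*) accel_module=${val// /} ;;
      esac
  done < perf_output.txt                        # placeholder input file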
00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:43.653 08:20:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:45.637 08:20:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.637 00:07:45.637 real 0m2.038s 00:07:45.637 user 0m1.871s 00:07:45.637 sys 0m0.181s 00:07:45.637 08:20:57 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.637 08:20:57 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:45.637 ************************************ 00:07:45.637 END TEST accel_fill 00:07:45.637 ************************************ 00:07:45.637 08:20:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.637 08:20:57 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:45.637 08:20:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:45.637 08:20:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.637 08:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.637 ************************************ 00:07:45.637 START TEST accel_copy_crc32c 00:07:45.637 ************************************ 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:45.637 08:20:57 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:45.637 [2024-07-23 08:20:57.801552] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:45.637 [2024-07-23 08:20:57.801646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613913 ] 00:07:45.637 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.637 [2024-07-23 08:20:57.914721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.637 [2024-07-23 08:20:58.076913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:45.896 
08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:45.896 08:20:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.274 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.532 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.532 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.532 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.533 00:07:47.533 real 0m2.033s 00:07:47.533 user 0m1.858s 00:07:47.533 sys 0m0.188s 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.533 08:20:59 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:47.533 ************************************ 00:07:47.533 END TEST accel_copy_crc32c 00:07:47.533 ************************************ 00:07:47.533 08:20:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.533 08:20:59 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.533 08:20:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:47.533 08:20:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.533 08:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.533 ************************************ 00:07:47.533 START TEST accel_copy_crc32c_C2 00:07:47.533 ************************************ 00:07:47.533 08:20:59 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:47.533 08:20:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:47.533 [2024-07-23 08:20:59.899337] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:47.533 [2024-07-23 08:20:59.899427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614156 ] 00:07:47.533 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.533 [2024-07-23 08:21:00.012385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.792 [2024-07-23 08:21:00.186753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
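The build_accel_config steps traced above assemble optional JSON fragments in the accel_json_cfg array (every [[ 0 -gt 0 ]] and [[ -n '' ]] gate is false in this run, so the array stays empty), join them with local IFS=, and validate the result with jq -r .; accel_perf then receives it as the /dev/fd/62 config file. A hedged sketch of that plumbing, assuming process substitution and a placeholder JSON skeleton rather than SPDK's actual schema:

  accel_json_cfg=()                   # per-module JSON fragments; empty here
  build_accel_config() {
      local IFS=,                     # join fragments with commas
      # placeholder skeleton for the sketch, not SPDK's real config schema
      printf '{"cfg":[%s]}' "${accel_json_cfg[*]}" | jq -r .
  }
  ./build/examples/accel_perf -c <(build_accel_config) -t 1 -w copy_crc32c -y -C 2
  # bash expands <(...) to a /dev/fd/NN path -- the /dev/fd/62 seen above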
00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:48.051 08:21:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
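The invocation traced for this test can be reproduced outside the harness. Flag meanings below are inferred from the values in this log rather than documented here, so treat them as assumptions and confirm against the binary's help output:

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
  # -t 1  : run for 1 second (the '1 seconds' value traced above)
  # -w    : opcode under test (copy_crc32c)
  # -y    : verify results (inference from -y appearing in every test)
  # -C 2  : two buffers per operation, which would explain the paired
  #         4096-byte and 8192-byte values traced for this test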
00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.492 00:07:49.492 real 0m2.070s 00:07:49.492 user 0m1.889s 00:07:49.492 sys 0m0.194s 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.492 08:21:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:49.492 ************************************ 00:07:49.492 END TEST accel_copy_crc32c_C2 00:07:49.492 ************************************ 00:07:49.492 08:21:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.492 08:21:01 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:49.492 08:21:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:49.492 08:21:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.492 08:21:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.492 ************************************ 00:07:49.492 START TEST accel_dualcast 00:07:49.492 ************************************ 00:07:49.492 08:21:02 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:49.492 08:21:02 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:49.751 [2024-07-23 08:21:02.036636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
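accel_dualcast, which starts above, exercises an opcode that writes one 4096-byte source to two destinations in a single operation. A plain-shell illustration of the semantics only, not of how accel_perf implements it:

  # emulate dualcast: one read of src, two identical destination writes
  src=$(mktemp) dst1=$(mktemp) dst2=$(mktemp)
  dd if=/dev/urandom of="$src" bs=4096 count=1 status=none
  tee "$dst1" < "$src" > "$dst2"
  cmp -s "$src" "$dst1" && cmp -s "$src" "$dst2" && echo dualcast-ok
  rm -f "$src" "$dst1" "$dst2"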
00:07:49.751 [2024-07-23 08:21:02.036730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614704 ] 00:07:49.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.751 [2024-07-23 08:21:02.153110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.008 [2024-07-23 08:21:02.318554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:50.008 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.009 08:21:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.911 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.911 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.911 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.911 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.911 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:51.912 08:21:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.912 00:07:51.912 real 0m2.062s 00:07:51.912 user 0m1.883s 00:07:51.912 sys 0m0.192s 00:07:51.912 08:21:04 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.912 08:21:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:51.912 ************************************ 00:07:51.912 END TEST accel_dualcast 00:07:51.912 ************************************ 00:07:51.912 08:21:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.912 08:21:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:51.912 08:21:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:51.912 08:21:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.912 08:21:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.912 ************************************ 00:07:51.912 START TEST accel_compare 00:07:51.912 ************************************ 00:07:51.912 08:21:04 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:51.912 08:21:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:51.912 [2024-07-23 08:21:04.168047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
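Each test ends with the three accel.sh@27 assertions visible above: a module was recorded, the expected opcode was recorded, and the module equals the literal string software. The backslash-escaped right-hand side is deliberate bash: inside [[ ]], an unquoted right-hand side is a glob pattern, and escaping every character forces a literal comparison.

  mod=software
  [[ $mod == \s\o\f\t\w\a\r\e ]] && echo "literal match"   # escaped: literal
  [[ $mod == s*e ]] && echo "glob match"                   # unescaped: pattern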
00:07:51.912 [2024-07-23 08:21:04.168153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614964 ] 00:07:51.912 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.912 [2024-07-23 08:21:04.287217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.171 [2024-07-23 08:21:04.453601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.171 08:21:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 
08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:54.077 08:21:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.077 00:07:54.077 real 0m2.084s 00:07:54.077 user 0m1.901s 00:07:54.077 sys 0m0.195s 00:07:54.077 08:21:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.077 08:21:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:54.077 ************************************ 00:07:54.077 END TEST accel_compare 00:07:54.077 ************************************ 00:07:54.077 08:21:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.077 08:21:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:54.077 08:21:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:54.077 08:21:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.077 08:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.077 ************************************ 00:07:54.077 START TEST accel_xor 00:07:54.077 ************************************ 00:07:54.077 08:21:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:54.077 08:21:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:54.078 08:21:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:54.078 [2024-07-23 08:21:06.312984] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
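The long runs of 'case "$var" in' / 'IFS=:' / 'read -r var val' entries above are bash xtrace (set -x) output from accel.sh's configuration loop: the harness streams key:value pairs describing the workload (opcode, buffer sizes, queue depth, run time) and the script consumes them one pair at a time, assigning accel_opc and accel_module along the way. A minimal sketch of that parsing pattern, with a hypothetical two-entry stream standing in for the real config:

    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;        # workload opcode, e.g. compare or xor
            module) accel_module=$val ;;  # backing module, e.g. software
            *) : ;;                       # remaining keys: sizes, queue depth, run time
        esac
    done <<< $'opc:compare\nmodule:software'
    echo "$accel_opc via $accel_module"   # prints: compare via software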
00:07:54.078 [2024-07-23 08:21:06.313072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615402 ] 00:07:54.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.078 [2024-07-23 08:21:06.427729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.078 [2024-07-23 08:21:06.592772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.342 08:21:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.272 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.273 00:07:56.273 real 0m2.068s 00:07:56.273 user 0m1.888s 00:07:56.273 sys 0m0.193s 00:07:56.273 08:21:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.273 08:21:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:56.273 ************************************ 00:07:56.273 END TEST accel_xor 00:07:56.273 ************************************ 00:07:56.273 08:21:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.273 08:21:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:56.273 08:21:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:56.273 08:21:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.273 08:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.273 ************************************ 00:07:56.273 START TEST accel_xor 00:07:56.273 ************************************ 00:07:56.273 08:21:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:56.273 08:21:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:56.273 [2024-07-23 08:21:08.447070] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
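The second accel_xor run starting here adds '-x 3' (visible in the run_test invocation above and as the 'val=3' entry below), so accel_perf xors three source buffers instead of the two used by the previous run ('val=2' earlier). A sketch of an equivalent direct invocation, assuming the workspace path shown in the log; the harness's '-c /dev/fd/62' argument appears to carry the JSON accel config assembled by build_accel_config (note the 'jq -r .' in the trace) and is dropped here:

    $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w xor -y -x 3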
00:07:56.273 [2024-07-23 08:21:08.447173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615721 ] 00:07:56.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.273 [2024-07-23 08:21:08.560526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.273 [2024-07-23 08:21:08.725333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.532 08:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:58.436 08:21:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.436 00:07:58.436 real 0m2.072s 00:07:58.436 user 0m1.892s 00:07:58.436 sys 0m0.191s 00:07:58.436 08:21:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.436 08:21:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:58.436 ************************************ 00:07:58.436 END TEST accel_xor 00:07:58.436 ************************************ 00:07:58.436 08:21:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.436 08:21:10 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:58.436 08:21:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:58.436 08:21:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.436 08:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.436 ************************************ 00:07:58.436 START TEST accel_dif_verify 00:07:58.436 ************************************ 00:07:58.436 08:21:10 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:58.436 08:21:10 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:58.436 [2024-07-23 08:21:10.589124] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
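The accel_dif_verify configuration that follows carries three sizes: 4096-byte data buffers, a 512-byte block size, and 8 bytes of metadata, matching the standard T10 DIF layout of one 8-byte protection-information tuple per 512-byte block. For the 4 KiB payload used here, the metadata overhead works out as:

    $ echo $(( (4096 / 512) * 8 ))   # DIF metadata bytes per 4 KiB payload
    64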
00:07:58.436 [2024-07-23 08:21:10.589224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616285 ] 00:07:58.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.436 [2024-07-23 08:21:10.708361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.436 [2024-07-23 08:21:10.874258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:58.694 08:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:00.598 08:21:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.598 00:08:00.598 real 0m2.066s 00:08:00.598 user 0m1.883s 00:08:00.598 sys 0m0.197s 00:08:00.598 08:21:12 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.598 08:21:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.598 ************************************ 00:08:00.598 END TEST accel_dif_verify 00:08:00.598 ************************************ 00:08:00.598 08:21:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.598 08:21:12 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:00.598 08:21:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:00.598 08:21:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.598 08:21:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.598 ************************************ 00:08:00.598 START TEST accel_dif_generate 00:08:00.598 ************************************ 00:08:00.598 08:21:12 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.598 
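Each test block above ends with the same three assertions: a module was reported, an opcode was reported, and the module that actually ran was 'software'. The backslashes in '[[ software == \s\o\f\t\w\a\r\e ]]' are an xtrace artifact, not garbling: when the right-hand side of == inside [[ ]] is quoted, bash's trace escapes every character to show it is matched literally rather than as a glob. A sketch of the same guard, with stand-in values for the variables the parser loop populates:

    accel_module=software accel_opc=dif_generate   # stand-in values
    [[ -n $accel_module && -n $accel_opc && $accel_module == "software" ]] \
        && echo "ran $accel_opc on the software module"

Run under 'bash -x', the last comparison prints in the same escaped form seen throughout this log.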
08:21:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:00.598 08:21:12 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:00.598 [2024-07-23 08:21:12.720248] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:00.598 [2024-07-23 08:21:12.720345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617039 ] 00:08:00.598 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.598 [2024-07-23 08:21:12.833105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.598 [2024-07-23 08:21:12.991396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:00.857 08:21:13 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:00.857 08:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.231 08:21:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:02.231 08:21:14 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.231 00:08:02.231 real 0m2.033s 00:08:02.231 user 0m1.858s 00:08:02.231 sys 0m0.192s 00:08:02.231 08:21:14 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.231 08:21:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:02.231 ************************************ 00:08:02.231 END TEST accel_dif_generate 00:08:02.231 ************************************ 00:08:02.489 08:21:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.489 08:21:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:02.489 08:21:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:02.489 08:21:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.489 08:21:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.489 ************************************ 00:08:02.489 START TEST accel_dif_generate_copy 00:08:02.489 ************************************ 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:02.489 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:02.490 08:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:02.490 [2024-07-23 08:21:14.820814] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
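Every run in this section asked for a one-second measurement window (-t 1) yet reports roughly two seconds of wall time (real 0m2.03s to 0m2.08s); the gap is consistent with EAL initialization and app teardown being timed together with the workload by the run_test wrapper. Note also that dif_generate_copy's config below carries only the two 4096-byte buffer entries, without the 512-byte/8-byte DIF vals seen in the two preceding tests. A sketch of timing a hypothetical direct run the same way:

    $ time /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w dif_generate_copy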
00:08:02.490 [2024-07-23 08:21:14.820907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617279 ] 00:08:02.490 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.490 [2024-07-23 08:21:14.934076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.748 [2024-07-23 08:21:15.095919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:02.748 08:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.649 00:08:04.649 real 0m2.053s 00:08:04.649 user 0m1.879s 00:08:04.649 sys 0m0.187s 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.649 08:21:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:04.649 ************************************ 00:08:04.649 END TEST accel_dif_generate_copy 00:08:04.649 ************************************ 00:08:04.649 08:21:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.649 08:21:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:04.649 08:21:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:04.649 08:21:16 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:04.649 08:21:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.649 08:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.649 ************************************ 00:08:04.649 START TEST accel_comp 00:08:04.649 ************************************ 00:08:04.649 08:21:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:04.649 08:21:16 accel.accel_comp -- 
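accel_comp is the first test here whose workload reads input from a file: the accel_perf command below passes '-l .../spdk/test/accel/bib', pointing the compress workload at a data file checked into the tree, and the '[[ y == y ]]' test just before the START banner is evidently a guard on an expanded configuration flag that enables the compression test. A sketch of the equivalent direct invocation, assuming the same tree layout:

    $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress \
        -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib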
accel/accel.sh@16 -- # local accel_opc 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:04.649 08:21:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:04.649 [2024-07-23 08:21:16.941176] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:04.649 [2024-07-23 08:21:16.941269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617720 ] 00:08:04.649 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.649 [2024-07-23 08:21:17.055053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.908 [2024-07-23 08:21:17.213956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- 
accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:04.908 08:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.815 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:06.816 08:21:18 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.816 00:08:06.816 real 0m2.039s 00:08:06.816 user 0m1.867s 00:08:06.816 sys 0m0.188s 00:08:06.816 08:21:18 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.816 08:21:18 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:06.816 ************************************ 00:08:06.816 END TEST accel_comp 00:08:06.816 ************************************ 00:08:06.816 08:21:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:06.816 08:21:18 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:06.816 08:21:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:06.816 08:21:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
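
(The real/user/sys timings and the START/END TEST banners above come from the run_test helper in common/autotest_common.sh. The helper itself is not reproduced in this log; this is a minimal hedged sketch of what the banners and timings imply it does, with the body assumed rather than taken from the source:)

    run_test() {                  # hedged reconstruction, not the actual helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
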
00:08:06.816 08:21:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.816 ************************************ 00:08:06.816 START TEST accel_decomp 00:08:06.816 ************************************ 00:08:06.816 08:21:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:06.816 08:21:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:06.816 [2024-07-23 08:21:19.046155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
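
(The accel/accel.sh@12 line above shows accel_perf receiving its JSON accel configuration as -c /dev/fd/62. A minimal sketch of producing that kind of file-descriptor path with bash process substitution; build_accel_config here is a hypothetical stand-in for the traced function, and the empty "subsystems" config is assumed:)

    build_accel_config() { printf '{"subsystems":[]}\n'; }   # hypothetical stand-in
    # <(...) expands to a /dev/fd/NN path, which is what accel_perf sees as -c:
    ./build/examples/accel_perf -c <(build_accel_config) -t 1 -w decompress \
        -l test/accel/bib -y
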
00:08:06.816 [2024-07-23 08:21:19.046236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617961 ] 00:08:06.816 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.816 [2024-07-23 08:21:19.161880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.816 [2024-07-23 08:21:19.324420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:07.075 08:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.976 08:21:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.976 00:08:08.976 real 0m2.082s 00:08:08.976 user 0m1.902s 00:08:08.976 sys 0m0.194s 00:08:08.976 08:21:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.976 08:21:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:08.976 ************************************ 00:08:08.976 END TEST accel_decomp 00:08:08.976 ************************************ 00:08:08.976 08:21:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.976 08:21:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:08.976 08:21:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:08.976 08:21:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.976 08:21:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.976 ************************************ 00:08:08.976 START TEST accel_decomp_full 00:08:08.976 ************************************ 00:08:08.976 08:21:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:08.976 08:21:21 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:08.976 08:21:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:08.976 [2024-07-23 08:21:21.190109] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:08.976 [2024-07-23 08:21:21.190188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618400 ] 00:08:08.976 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.976 [2024-07-23 08:21:21.306784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.976 [2024-07-23 08:21:21.466902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.235 08:21:21 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.235 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- 
# case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:09.236 08:21:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.137 08:21:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.137 00:08:11.137 real 0m2.055s 00:08:11.137 user 0m1.886s 00:08:11.137 sys 0m0.185s 00:08:11.137 08:21:23 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.137 08:21:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 ************************************ 00:08:11.137 END TEST accel_decomp_full 00:08:11.137 ************************************ 00:08:11.137 08:21:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.137 08:21:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.137 08:21:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 
']' 00:08:11.137 08:21:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.137 08:21:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.137 ************************************ 00:08:11.137 START TEST accel_decomp_mcore 00:08:11.137 ************************************ 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:11.137 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:11.137 [2024-07-23 08:21:23.309952] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
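
(The -m 0xf passed to accel_perf above selects CPU cores by bit position, which is why four "Reactor started on core 0..3" notices follow. A quick bash check of which bits 0xf sets; the 0..7 scan range is arbitrary:)

    mask=0xf
    printf 'cores selected by %s:' "$mask"
    for i in $(seq 0 7); do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done
    echo    # prints: cores selected by 0xf: 0 1 2 3
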
00:08:11.137 [2024-07-23 08:21:23.310033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618690 ] 00:08:11.137 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.137 [2024-07-23 08:21:23.426769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.137 [2024-07-23 08:21:23.590131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.137 [2024-07-23 08:21:23.590191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.137 [2024-07-23 08:21:23.590237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.137 [2024-07-23 08:21:23.590208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.394 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.395 08:21:23 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:08:11.395 08:21:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.293 00:08:13.293 real 0m2.092s 00:08:13.293 user 0m6.452s 00:08:13.293 sys 0m0.210s 00:08:13.293 08:21:25 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.293 08:21:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:13.293 ************************************ 00:08:13.293 END TEST accel_decomp_mcore 00:08:13.293 ************************************ 00:08:13.293 08:21:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:13.293 08:21:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.293 08:21:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:13.293 08:21:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.293 08:21:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.293 ************************************ 00:08:13.293 START TEST accel_decomp_full_mcore 00:08:13.293 ************************************ 00:08:13.293 08:21:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.293 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:13.293 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:13.293 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.293 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:13.294 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:13.294 [2024-07-23 08:21:25.469514] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
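
(The accel/accel.sh@19-@23 lines repeated throughout this section are the xtrace of a single parsing loop: the expected accel_perf settings are read back as colon-separated var:val pairs. A hedged reconstruction follows; the loop structure and the var/val/accel_opc/accel_module names come from the trace, while the case keys and the input variable are assumed:)

    while IFS=: read -r var val; do          # accel.sh@19 in the trace
        case "$var" in                       # accel.sh@21
            opc)    accel_opc=$val ;;        # assumed key; @23 shows accel_opc=...
            module) accel_module=$val ;;     # assumed key; @22 shows accel_module=...
        esac
    done <<< "$perf_output"                  # input source assumed
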
00:08:13.294 [2024-07-23 08:21:25.469607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619085 ] 00:08:13.294 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.294 [2024-07-23 08:21:25.583255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.294 [2024-07-23 08:21:25.745840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.294 [2024-07-23 08:21:25.745915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.294 [2024-07-23 08:21:25.745971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.294 [2024-07-23 08:21:25.745999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.552 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:13.553 08:21:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.454 00:08:15.454 real 0m2.123s 00:08:15.454 user 0m6.558s 00:08:15.454 sys 0m0.204s 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.454 08:21:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:15.454 ************************************ 00:08:15.454 END TEST accel_decomp_full_mcore 00:08:15.454 ************************************ 00:08:15.454 08:21:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.454 08:21:27 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.454 08:21:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:15.454 08:21:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.454 08:21:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.454 ************************************ 00:08:15.454 START TEST accel_decomp_mthread 00:08:15.454 ************************************ 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:15.454 08:21:27 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:15.454 [2024-07-23 08:21:27.659652] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:15.454 [2024-07-23 08:21:27.659753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619529 ] 00:08:15.454 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.454 [2024-07-23 08:21:27.774016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.454 [2024-07-23 08:21:27.932204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.714 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.715 08:21:28 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:15.715 08:21:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.621 00:08:17.621 real 0m2.041s 00:08:17.621 user 0m1.871s 00:08:17.621 sys 0m0.187s 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.621 08:21:29 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:17.621 ************************************ 00:08:17.621 END TEST accel_decomp_mthread 00:08:17.621 ************************************ 00:08:17.621 08:21:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.621 08:21:29 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.621 08:21:29 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:17.621 08:21:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 
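For reference, the accel_decomp_mthread case above drives the same accel_perf binary as the single-threaded runs, only with -T 2 to spread the decompress work across two worker threads. A minimal standalone sketch of that invocation, assuming the workspace layout of this run; the '{}' fed over fd 62 is a hypothetical empty config standing in for whatever build_accel_config emits:

    #!/usr/bin/env bash
    # Rerun the 2-thread software decompress case by hand.
    # -t 1: run for 1 second; -w decompress: workload; -y: verify output; -T 2: two threads.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$spdk/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$spdk/test/accel/bib" -y -T 2 \
        62<<< '{}'   # hypothetical empty JSON config; the suite pipes a generated one here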
00:08:17.621 08:21:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.621 ************************************ 00:08:17.621 START TEST accel_decomp_full_mthread 00:08:17.621 ************************************ 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:17.621 08:21:29 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:17.621 [2024-07-23 08:21:29.764623] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:17.621 [2024-07-23 08:21:29.764716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619768 ] 00:08:17.621 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.621 [2024-07-23 08:21:29.877934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.621 [2024-07-23 08:21:30.047778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.881 08:21:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.787 00:08:19.787 real 0m2.088s 00:08:19.787 user 0m1.907s 00:08:19.787 sys 0m0.193s 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.787 08:21:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:19.787 ************************************ 00:08:19.787 END TEST accel_decomp_full_mthread 
00:08:19.787 ************************************ 00:08:19.787 08:21:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.787 08:21:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:19.787 08:21:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:19.787 08:21:31 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:19.787 08:21:31 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:19.787 08:21:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.787 08:21:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.787 08:21:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.787 08:21:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.787 08:21:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.787 08:21:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.787 08:21:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.787 08:21:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:19.787 08:21:31 accel -- accel/accel.sh@41 -- # jq -r . 00:08:19.787 ************************************ 00:08:19.787 START TEST accel_dif_functional_tests 00:08:19.787 ************************************ 00:08:19.787 08:21:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:19.787 [2024-07-23 08:21:31.923438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:19.787 [2024-07-23 08:21:31.923521] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620216 ] 00:08:19.787 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.787 [2024-07-23 08:21:32.038067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.788 [2024-07-23 08:21:32.198924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.788 [2024-07-23 08:21:32.198971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.788 [2024-07-23 08:21:32.198997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.047 00:08:20.047 00:08:20.047 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.047 http://cunit.sourceforge.net/ 00:08:20.047 00:08:20.047 00:08:20.047 Suite: accel_dif 00:08:20.047 Test: verify: DIF generated, GUARD check ...passed 00:08:20.047 Test: verify: DIF generated, APPTAG check ...passed 00:08:20.047 Test: verify: DIF generated, REFTAG check ...passed 00:08:20.047 Test: verify: DIF not generated, GUARD check ...[2024-07-23 08:21:32.425380] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:20.047 passed 00:08:20.047 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 08:21:32.425461] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:20.047 passed 00:08:20.047 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 08:21:32.425501] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:20.047 passed 00:08:20.047 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:20.047 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 08:21:32.425565] dif.c: 
878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:20.047 passed 00:08:20.047 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:20.047 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:20.047 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:20.047 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 08:21:32.425703] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:20.047 passed 00:08:20.047 Test: verify copy: DIF generated, GUARD check ...passed 00:08:20.047 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:20.047 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:20.047 Test: verify copy: DIF not generated, GUARD check ...[2024-07-23 08:21:32.425879] dif.c: 863:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:20.047 passed 00:08:20.047 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 08:21:32.425920] dif.c: 878:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:20.047 passed 00:08:20.047 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 08:21:32.425964] dif.c: 813:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:20.047 passed 00:08:20.047 Test: generate copy: DIF generated, GUARD check ...passed 00:08:20.047 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:20.047 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:20.047 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:20.047 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:20.047 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:20.047 Test: generate copy: iovecs-len validate ...[2024-07-23 08:21:32.426239] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:08:20.047 passed 00:08:20.047 Test: generate copy: buffer alignment validate ...passed 00:08:20.047 00:08:20.047 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.047 suites 1 1 n/a 0 0 00:08:20.047 tests 26 26 26 0 0 00:08:20.047 asserts 115 115 115 0 n/a 00:08:20.047 00:08:20.047 Elapsed time = 0.003 seconds 00:08:20.984 00:08:20.984 real 0m1.331s 00:08:20.984 user 0m2.624s 00:08:20.984 sys 0m0.235s 00:08:20.984 08:21:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.984 08:21:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:20.984 ************************************ 00:08:20.984 END TEST accel_dif_functional_tests 00:08:20.984 ************************************ 00:08:20.984 08:21:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.984 00:08:20.984 real 0m49.046s 00:08:20.984 user 0m54.582s 00:08:20.984 sys 0m6.157s 00:08:20.984 08:21:33 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.984 08:21:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.984 ************************************ 00:08:20.984 END TEST accel 00:08:20.984 ************************************ 00:08:20.984 08:21:33 -- common/autotest_common.sh@1142 -- # return 0 00:08:20.985 08:21:33 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:20.985 08:21:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.985 08:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.985 08:21:33 -- common/autotest_common.sh@10 -- # set +x 00:08:20.985 ************************************ 00:08:20.985 START TEST accel_rpc 00:08:20.985 ************************************ 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:20.985 * Looking for test storage... 00:08:20.985 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:20.985 08:21:33 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:20.985 08:21:33 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=620488 00:08:20.985 08:21:33 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 620488 00:08:20.985 08:21:33 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 620488 ']' 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.985 08:21:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.985 [2024-07-23 08:21:33.452061] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
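The accel_rpc test starting here follows a fixed RPC choreography: spdk_tgt is launched with --wait-for-rpc so the framework stays uninitialized, the copy opcode is assigned to a module (including, below, the deliberately nonexistent module "incorrect" as a negative case), and only then is framework_start_init issued. A sketch of the happy path driven directly through rpc.py, assuming the paths from this run and using sleep as a crude stand-in for the suite's waitforlisten helper:

    #!/usr/bin/env bash
    # Pin the 'copy' opcode to the software module before the framework starts.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    "$spdk/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 1                                         # crude stand-in for waitforlisten
    "$rpc" accel_assign_opc -o copy -m software     # must happen before init
    "$rpc" framework_start_init                     # now the subsystems come up
    "$rpc" accel_get_opc_assignments | jq -r .copy  # expect: software
    kill "$tgt_pid"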
00:08:20.985 [2024-07-23 08:21:33.452163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620488 ] 00:08:21.244 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.244 [2024-07-23 08:21:33.571741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.244 [2024-07-23 08:21:33.734643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.813 08:21:34 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.813 08:21:34 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:21.813 08:21:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:21.813 08:21:34 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:21.813 08:21:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:21.813 08:21:34 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:21.813 08:21:34 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:21.813 08:21:34 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.813 08:21:34 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.813 08:21:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.813 ************************************ 00:08:21.813 START TEST accel_assign_opcode 00:08:21.813 ************************************ 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:21.813 [2024-07-23 08:21:34.292561] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:21.813 [2024-07-23 08:21:34.300551] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.813 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.381 software 00:08:22.381 00:08:22.381 real 0m0.606s 00:08:22.381 user 0m0.046s 00:08:22.381 sys 0m0.010s 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.381 08:21:34 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:22.381 ************************************ 00:08:22.381 END TEST accel_assign_opcode 00:08:22.381 ************************************ 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:22.640 08:21:34 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 620488 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 620488 ']' 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 620488 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620488 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620488' 00:08:22.640 killing process with pid 620488 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@967 -- # kill 620488 00:08:22.640 08:21:34 accel_rpc -- common/autotest_common.sh@972 -- # wait 620488 00:08:24.016 00:08:24.016 real 0m3.155s 00:08:24.016 user 0m3.100s 00:08:24.016 sys 0m0.533s 00:08:24.016 08:21:36 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.016 08:21:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.016 ************************************ 00:08:24.016 END TEST accel_rpc 00:08:24.016 ************************************ 00:08:24.017 08:21:36 -- common/autotest_common.sh@1142 -- # return 0 00:08:24.017 08:21:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:24.017 08:21:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:24.017 08:21:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.017 08:21:36 -- common/autotest_common.sh@10 -- # set +x 00:08:24.275 ************************************ 00:08:24.275 START TEST app_cmdline 00:08:24.275 ************************************ 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:24.276 * Looking for test storage... 
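cmdline.sh, traced below, boots spdk_tgt with --rpcs-allowed so that only spdk_get_version and rpc_get_methods are callable, then checks both the version payload and the advertised method list. A condensed sketch of that check, assuming the same workspace paths and again substituting sleep for waitforlisten:

    #!/usr/bin/env bash
    # Start a target that only answers the two allowlisted RPCs.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    "$spdk/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 1                                        # stand-in for waitforlisten
    "$rpc" spdk_get_version | jq -r .version       # e.g. "SPDK v24.09-pre git sha1 f7b31b2b9"
    "$rpc" rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
    kill "$tgt_pid"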
00:08:24.276 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:24.276 08:21:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:24.276 08:21:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=620994 00:08:24.276 08:21:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 620994 00:08:24.276 08:21:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 620994 ']' 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.276 08:21:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:24.276 [2024-07-23 08:21:36.671132] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:24.276 [2024-07-23 08:21:36.671215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620994 ] 00:08:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.276 [2024-07-23 08:21:36.790251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.534 [2024-07-23 08:21:36.959075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.101 08:21:37 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.101 08:21:37 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:25.101 08:21:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:25.359 { 00:08:25.359 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:08:25.359 "fields": { 00:08:25.359 "major": 24, 00:08:25.359 "minor": 9, 00:08:25.359 "patch": 0, 00:08:25.359 "suffix": "-pre", 00:08:25.359 "commit": "f7b31b2b9" 00:08:25.359 } 00:08:25.359 } 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:25.359 08:21:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:25.359 08:21:37 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.617 request: 00:08:25.618 { 00:08:25.618 "method": "env_dpdk_get_mem_stats", 00:08:25.618 "req_id": 1 00:08:25.618 } 00:08:25.618 Got JSON-RPC error response 00:08:25.618 response: 00:08:25.618 { 00:08:25.618 "code": -32601, 00:08:25.618 "message": "Method not found" 00:08:25.618 } 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:25.618 08:21:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 620994 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 620994 ']' 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 620994 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 620994 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 620994' 00:08:25.618 killing process with pid 620994 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@967 -- # kill 620994 00:08:25.618 08:21:37 app_cmdline -- common/autotest_common.sh@972 -- # wait 620994 00:08:26.995 00:08:26.995 real 0m2.920s 00:08:26.995 user 0m3.149s 00:08:26.995 sys 0m0.534s 00:08:26.995 08:21:39 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
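The "Method not found" exchange above is the allowlist doing its job: env_dpdk_get_mem_stats is not in --rpcs-allowed, so the target answers with JSON-RPC error -32601 and rpc.py exits nonzero, which is exactly what the NOT wrapper asserts. A sketch of checking that rejection directly, against a target started as in the previous sketch:

    #!/usr/bin/env bash
    # A method outside the allowlist must be rejected by the target.
    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    if ! "$rpc" env_dpdk_get_mem_stats 2>/dev/null; then
        echo "denied as expected: JSON-RPC error -32601, Method not found"
    fi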
00:08:26.995 08:21:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.995 ************************************ 00:08:26.995 END TEST app_cmdline 00:08:26.995 ************************************ 00:08:26.995 08:21:39 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.995 08:21:39 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:26.995 08:21:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:26.995 08:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.995 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:08:27.254 ************************************ 00:08:27.254 START TEST version 00:08:27.254 ************************************ 00:08:27.254 08:21:39 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:27.254 * Looking for test storage... 00:08:27.254 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:27.254 08:21:39 version -- app/version.sh@17 -- # get_header_version major 00:08:27.254 08:21:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # cut -f2 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.254 08:21:39 version -- app/version.sh@17 -- # major=24 00:08:27.254 08:21:39 version -- app/version.sh@18 -- # get_header_version minor 00:08:27.254 08:21:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # cut -f2 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.254 08:21:39 version -- app/version.sh@18 -- # minor=9 00:08:27.254 08:21:39 version -- app/version.sh@19 -- # get_header_version patch 00:08:27.254 08:21:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # cut -f2 00:08:27.254 08:21:39 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.255 08:21:39 version -- app/version.sh@19 -- # patch=0 00:08:27.255 08:21:39 version -- app/version.sh@20 -- # get_header_version suffix 00:08:27.255 08:21:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:27.255 08:21:39 version -- app/version.sh@14 -- # cut -f2 00:08:27.255 08:21:39 version -- app/version.sh@14 -- # tr -d '"' 00:08:27.255 08:21:39 version -- app/version.sh@20 -- # suffix=-pre 00:08:27.255 08:21:39 version -- app/version.sh@22 -- # version=24.9 00:08:27.255 08:21:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:27.255 08:21:39 version -- app/version.sh@28 -- # version=24.9rc0 00:08:27.255 08:21:39 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:27.255 08:21:39 version -- app/version.sh@30 -- # python3 -c 
'import spdk; print(spdk.__version__)' 00:08:27.255 08:21:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:27.255 08:21:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:27.255 00:08:27.255 real 0m0.150s 00:08:27.255 user 0m0.080s 00:08:27.255 sys 0m0.105s 00:08:27.255 08:21:39 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.255 08:21:39 version -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 ************************************ 00:08:27.255 END TEST version 00:08:27.255 ************************************ 00:08:27.255 08:21:39 -- common/autotest_common.sh@1142 -- # return 0 00:08:27.255 08:21:39 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@198 -- # uname -s 00:08:27.255 08:21:39 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:27.255 08:21:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.255 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 08:21:39 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:08:27.255 08:21:39 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:08:27.255 08:21:39 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:27.255 08:21:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.255 08:21:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.255 08:21:39 -- common/autotest_common.sh@10 -- # set +x 00:08:27.515 ************************************ 00:08:27.515 START TEST llvm_fuzz 00:08:27.515 ************************************ 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:27.515 * Looking for test storage... 
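The get_header_version helper traced in the version test above is a three-stage pipeline over include/spdk/version.h: grep the #define, cut the second field, strip the quotes. The same extraction in isolation, assuming the checkout path from this run:

    #!/usr/bin/env bash
    # Pull the version components out of version.h, as app/version.sh does above.
    hdr=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "$major.$minor"   # this run prints 24.9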
00:08:27.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@546 -- # fuzzers=() 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@546 -- # local fuzzers 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@548 -- # [[ -n '' ]] 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@551 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@555 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:27.515 08:21:39 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.515 08:21:39 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:27.515 ************************************ 00:08:27.515 START TEST nvmf_llvm_fuzz 00:08:27.515 ************************************ 00:08:27.515 08:21:39 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:27.515 * Looking for test storage... 
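The fuzzer discovery traced just above is two bash parameter expansions: a glob fills the array with full paths under test/fuzz/llvm/, then ${fuzzers[@]##*/} strips everything up to the last slash, leaving bare target names (common.sh llvm-gcov.sh nvmf vfio in this run). A small isolated demonstration of the idiom:

    #!/usr/bin/env bash
    # Same discovery trick as get_fuzzer_targets in llvm.sh.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzers=("$rootdir/test/fuzz/llvm/"*)   # full paths from the glob
    fuzzers=("${fuzzers[@]##*/}")           # keep only the basename of each entry
    echo "${fuzzers[@]}"                    # here: common.sh llvm-gcov.sh nvmf vfio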
00:08:27.515 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:27.515 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:27.516 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:27.516 #define SPDK_CONFIG_H 00:08:27.516 #define SPDK_CONFIG_APPS 1 00:08:27.516 #define SPDK_CONFIG_ARCH native 00:08:27.516 #define SPDK_CONFIG_ASAN 1 00:08:27.516 #undef SPDK_CONFIG_AVAHI 00:08:27.516 #undef SPDK_CONFIG_CET 00:08:27.516 #define SPDK_CONFIG_COVERAGE 1 00:08:27.516 #define SPDK_CONFIG_CROSS_PREFIX 00:08:27.516 #undef SPDK_CONFIG_CRYPTO 00:08:27.516 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:27.516 #undef SPDK_CONFIG_CUSTOMOCF 00:08:27.516 #undef SPDK_CONFIG_DAOS 00:08:27.516 #define SPDK_CONFIG_DAOS_DIR 00:08:27.516 #define SPDK_CONFIG_DEBUG 1 00:08:27.516 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:27.516 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:27.516 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:27.516 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:27.516 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:27.516 #undef SPDK_CONFIG_DPDK_UADK 00:08:27.516 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:27.516 #define SPDK_CONFIG_EXAMPLES 1 00:08:27.516 #undef SPDK_CONFIG_FC 00:08:27.516 #define SPDK_CONFIG_FC_PATH 00:08:27.516 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:27.516 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:27.516 #undef SPDK_CONFIG_FUSE 00:08:27.516 #define SPDK_CONFIG_FUZZER 1 00:08:27.516 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:27.516 #undef SPDK_CONFIG_GOLANG 00:08:27.516 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:27.516 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:27.516 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:27.516 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:27.516 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:27.516 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:27.516 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:27.516 #define SPDK_CONFIG_IDXD 1 00:08:27.516 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:27.516 #undef SPDK_CONFIG_IPSEC_MB 00:08:27.516 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:27.516 #define SPDK_CONFIG_ISAL 1 00:08:27.516 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:08:27.516 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:27.516 #define SPDK_CONFIG_LIBDIR 00:08:27.516 #undef SPDK_CONFIG_LTO 00:08:27.516 #define SPDK_CONFIG_MAX_LCORES 128 00:08:27.516 #define SPDK_CONFIG_NVME_CUSE 1 00:08:27.516 #undef SPDK_CONFIG_OCF 00:08:27.516 #define SPDK_CONFIG_OCF_PATH 00:08:27.516 #define SPDK_CONFIG_OPENSSL_PATH 00:08:27.516 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:27.516 #define SPDK_CONFIG_PGO_DIR 00:08:27.516 #undef SPDK_CONFIG_PGO_USE 00:08:27.516 #define SPDK_CONFIG_PREFIX /usr/local 00:08:27.516 #undef SPDK_CONFIG_RAID5F 00:08:27.516 #undef SPDK_CONFIG_RBD 00:08:27.516 #define SPDK_CONFIG_RDMA 1 00:08:27.516 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:27.516 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:27.516 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:27.516 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:27.516 #undef SPDK_CONFIG_SHARED 00:08:27.516 #undef SPDK_CONFIG_SMA 00:08:27.516 #define SPDK_CONFIG_TESTS 1 00:08:27.516 #undef SPDK_CONFIG_TSAN 00:08:27.517 #define SPDK_CONFIG_UBLK 1 00:08:27.517 #define SPDK_CONFIG_UBSAN 1 00:08:27.517 #undef SPDK_CONFIG_UNIT_TESTS 00:08:27.517 #undef SPDK_CONFIG_URING 00:08:27.517 #define SPDK_CONFIG_URING_PATH 00:08:27.517 #undef SPDK_CONFIG_URING_ZNS 00:08:27.517 #undef SPDK_CONFIG_USDT 00:08:27.517 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:27.517 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:27.517 #define SPDK_CONFIG_VFIO_USER 1 00:08:27.517 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:27.517 #define SPDK_CONFIG_VHOST 1 00:08:27.517 #define SPDK_CONFIG_VIRTIO 1 00:08:27.517 #undef SPDK_CONFIG_VTUNE 00:08:27.517 #define SPDK_CONFIG_VTUNE_DIR 00:08:27.517 #define SPDK_CONFIG_WERROR 1 00:08:27.517 #define SPDK_CONFIG_WPDK_DIR 00:08:27.517 #undef SPDK_CONFIG_XNVME 00:08:27.517 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:27.517 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:27.517 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:27.778 
08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:27.778 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:27.779 08:21:40 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
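Each pair of records of the form ': 0' followed by 'export SPDK_TEST_NVME_CUSE' (or ': rdma' / ': true' for the string-valued flags) is the trace of a default-then-export idiom. The source lines themselves are not shown in this trace, but the pattern presumably looks like the following, where ':' is the no-op builtin used only to force the default assignment:

    # Assumed shape of the flag setup traced above.
    : "${SPDK_TEST_NVME_CUSE:=0}"    # traces as '# : 0' once expanded
    export SPDK_TEST_NVME_CUSE       # traces as '# export SPDK_TEST_NVME_CUSE'

Flags this job enables, such as SPDK_TEST_FUZZER and SPDK_TEST_FUZZER_SHORT, show up as ': 1' because they were already set before this block ran.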
00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.779 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:27.780 08:21:40 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:27.780 08:21:40 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 621784 ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 621784 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.KPoM4k 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.KPoM4k/tests/nvmf /tmp/spdk.KPoM4k 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=948682752 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4335747072 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53625696256 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742075904 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8116379648 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866325504 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871035904 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342378496 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348416000 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6037504 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870609920 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871040000 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=430080 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174199808 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174203904 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:27.780 * Looking for test storage... 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:27.780 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53625696256 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10330972160 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.781 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- 
# printf %02d 0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:27.781 08:21:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:08:27.781 [2024-07-23 08:21:40.207239] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:27.781 [2024-07-23 08:21:40.207341] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621829 ] 00:08:27.781 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.040 [2024-07-23 08:21:40.436414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.298 [2024-07-23 08:21:40.585564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.556 [2024-07-23 08:21:40.821606] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.556 [2024-07-23 08:21:40.837850] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:28.556 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.556 INFO: Seed: 3515893037 00:08:28.556 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:28.556 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:28.556 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:28.556 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.556 #2 INITED exec/s: 0 rss: 202Mb 00:08:28.556 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
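The listener port visible above is derived from the fuzzer index: printf %02d 0 yields '00', giving trsvcid 4400, and the sed call rewrites the stock 4420 in fuzz_json.conf to match before the target is launched with the per-run corpus directory (-D), the echoed leak suppressions, and a one-second (-t 1) time budget. A sketch of the port and config derivation, assuming the '44' prefix concatenation and the redirection target (neither is directly visible in the trace):

    # Assumed port construction: two-digit fuzzer index appended to '44'.
    fuzzer_type=0
    port=44$(printf '%02d' "$fuzzer_type")   # -> 4400
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_${fuzzer_type}.conf"

The closing WARNING is expected on a fresh start: with '0 files found' in the corpus directory, libFuzzer has not yet seen any coverage-producing input, and the next record names the other common cause.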
00:08:28.556 This may also happen if the target rejected all inputs we tried so far 00:08:28.556 [2024-07-23 08:21:40.893738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.556 [2024-07-23 08:21:40.893781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.556 NEW_FUNC[1/697]: 0x54b3e0 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:28.556 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.556 [2024-07-23 08:21:41.064162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.556 [2024-07-23 08:21:41.064215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.815 #6 NEW cov: 11961 ft: 11972 corp: 2/74b lim: 320 exec/s: 0 rss: 218Mb L: 73/73 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:08:28.815 [2024-07-23 08:21:41.138647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.138684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.815 [2024-07-23 08:21:41.138745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.138762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.815 NEW_FUNC[1/2]: 0x168ad10 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2093 00:08:28.815 NEW_FUNC[2/2]: 0x1e87eb0 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:898 00:08:28.815 [2024-07-23 08:21:41.198576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.198607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.815 [2024-07-23 08:21:41.198668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.198686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.815 #23 NEW cov: 12038 ft: 12689 corp: 3/211b lim: 320 exec/s: 0 rss: 220Mb L: 137/137 MS: 1 CopyPart- 00:08:28.815 [2024-07-23 08:21:41.271124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.271157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.815 [2024-07-23 
08:21:41.271207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.271221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:28.815 [2024-07-23 08:21:41.331216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.331247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:28.815 [2024-07-23 08:21:41.331314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:28.815 [2024-07-23 08:21:41.331333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.074 #25 NEW cov: 12050 ft: 12766 corp: 4/348b lim: 320 exec/s: 0 rss: 222Mb L: 137/137 MS: 1 ShuffleBytes- 00:08:29.074 [2024-07-23 08:21:41.402621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.402656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.402718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.402733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.442644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.442674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.442750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.442765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.075 #27 NEW cov: 12136 ft: 13248 corp: 5/486b lim: 320 exec/s: 0 rss: 224Mb L: 138/138 MS: 1 InsertByte- 00:08:29.075 [2024-07-23 08:21:41.503863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x737373737373 00:08:29.075 [2024-07-23 08:21:41.503900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.503959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:73737373 cdw11:73737373 00:08:29.075 [2024-07-23 08:21:41.503975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.504040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:6 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.504056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.543524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x737373737373 00:08:29.075 [2024-07-23 08:21:41.543553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.543605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:73737373 cdw11:73737373 00:08:29.075 [2024-07-23 08:21:41.543619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.075 [2024-07-23 08:21:41.543678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:6 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.075 [2024-07-23 08:21:41.543692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:29.075 #29 NEW cov: 12147 ft: 13715 corp: 6/692b lim: 320 exec/s: 0 rss: 226Mb L: 206/206 MS: 1 InsertRepeatedBytes- 00:08:29.333 [2024-07-23 08:21:41.612973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.333 [2024-07-23 08:21:41.613005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.333 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:29.333 [2024-07-23 08:21:41.652723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.333 [2024-07-23 08:21:41.652753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.333 #31 NEW cov: 12164 ft: 13834 corp: 7/766b lim: 320 exec/s: 0 rss: 226Mb L: 74/206 MS: 1 InsertByte- 00:08:29.333 [2024-07-23 08:21:41.724479] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.333 [2024-07-23 08:21:41.724510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.333 [2024-07-23 08:21:41.764228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.334 [2024-07-23 08:21:41.764258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.334 #33 NEW cov: 12164 ft: 13898 corp: 8/870b lim: 320 exec/s: 0 rss: 228Mb L: 104/206 MS: 1 CrossOver- 00:08:29.334 [2024-07-23 08:21:41.810811] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.334 [2024-07-23 08:21:41.810852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.334 [2024-07-23 08:21:41.810930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.334 [2024-07-23 08:21:41.810955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.334 [2024-07-23 08:21:41.850433] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.334 [2024-07-23 08:21:41.850463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.334 [2024-07-23 08:21:41.850523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.334 [2024-07-23 08:21:41.850538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.593 #35 NEW cov: 12164 ft: 14040 corp: 9/1007b lim: 320 exec/s: 35 rss: 229Mb L: 137/206 MS: 1 ShuffleBytes- 00:08:29.593 [2024-07-23 08:21:41.906119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:41.906150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.593 [2024-07-23 08:21:41.955789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:41.955818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.593 #37 NEW cov: 12164 ft: 14121 corp: 10/1081b lim: 320 exec/s: 37 rss: 230Mb L: 74/206 MS: 1 ChangeByte- 00:08:29.593 [2024-07-23 08:21:42.006619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:42.006664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.593 [2024-07-23 08:21:42.045985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:42.046016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.593 #39 NEW cov: 12164 ft: 14166 corp: 11/1154b lim: 320 exec/s: 39 rss: 232Mb L: 73/206 MS: 1 ChangeByte- 00:08:29.593 [2024-07-23 08:21:42.095562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:42.095604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
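For readability, the per-fuzzer setup that the nvmf/run.sh xtrace earlier in this run performs (run.sh@23 through run.sh@45) can be condensed into the sketch below. This is a sketch of the traced logic, not the verbatim script: the workspace-absolute paths are shortened to $rootdir/$testdir, and the redirections onto the config and suppression files are inferred from the -c argument and LSAN_OPTIONS, since bash xtrace does not print redirections.

  # Condensed sketch of start_llvm_fuzz as traced above (fuzzer 0 shown).
  start_llvm_fuzz() {
      local fuzzer_type=$1 timen=$2 core=$3
      local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
      local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
      local suppress_file=/var/tmp/suppress_nvmf_fuzz
      # (the real script arranges for this to reach the fuzzer's environment)
      local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

      # Each instance listens on its own TCP port: "44" plus the zero-padded
      # fuzzer index, so fuzzer 0 -> 4400, fuzzer 1 -> 4401, and so on.
      local port="44$(printf %02d "$fuzzer_type")"
      mkdir -p "$corpus_dir"
      local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

      # Rewrite the default listener port (4420) in the shared JSON config.
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
          "$testdir/fuzz_json.conf" > "$nvmf_cfg"

      # Suppress two known shutdown-time allocations for LeakSanitizer.
      echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
      echo leak:nvmf_ctrlr_create >> "$suppress_file"

      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
          -P "$rootdir/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
          -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
  }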
00:08:29.593 [2024-07-23 08:21:42.095681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.593 [2024-07-23 08:21:42.095702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.135126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.135156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.135232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.135247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.852 #41 NEW cov: 12164 ft: 14216 corp: 12/1291b lim: 320 exec/s: 41 rss: 232Mb L: 137/206 MS: 1 ChangeBit- 00:08:29.852 [2024-07-23 08:21:42.187786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.187829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.247391] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.247421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.852 #43 NEW cov: 12164 ft: 14229 corp: 13/1395b lim: 320 exec/s: 43 rss: 234Mb L: 104/206 MS: 1 ChangeBit- 00:08:29.852 [2024-07-23 08:21:42.306424] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.306462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.306530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.306548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.366192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.366222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:29.852 [2024-07-23 08:21:42.366283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:29.852 [2024-07-23 08:21:42.366298] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.110 #45 NEW cov: 12164 ft: 14262 corp: 14/1533b lim: 320 exec/s: 45 rss: 236Mb L: 138/206 MS: 1 ChangeBit- 00:08:30.110 [2024-07-23 08:21:42.427137] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.427174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.427246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.427264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.466737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.466769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.466828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.466843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.110 #57 NEW cov: 12164 ft: 14329 corp: 15/1671b lim: 320 exec/s: 57 rss: 237Mb L: 138/206 MS: 1 CrossOver- 00:08:30.110 [2024-07-23 08:21:42.523232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.523277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.523365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:64707373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.523389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.562633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.562663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.110 [2024-07-23 08:21:42.562742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:64707373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.110 [2024-07-23 08:21:42.562757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.110 #59 NEW cov: 12164 ft: 14346 corp: 16/1809b lim: 320 exec/s: 59 rss: 238Mb L: 138/206 MS: 1 CMP- DE: "spdk_nvme_driver"- 00:08:30.370 [2024-07-23 08:21:42.633621] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.633662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.633736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.633756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.633833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:6 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.633852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.693337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.693369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.693432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.693447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.693508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:6 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.693521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:30.370 #61 NEW cov: 12164 ft: 14416 corp: 17/2015b lim: 320 exec/s: 61 rss: 239Mb L: 206/206 MS: 1 CrossOver- 00:08:30.370 [2024-07-23 08:21:42.744940] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.744983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.745069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:64707373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.745094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.804503] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.804533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.804592] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:64707373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.804607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.370 #63 NEW cov: 12164 ft: 14450 corp: 18/2153b lim: 320 exec/s: 63 rss: 241Mb L: 138/206 MS: 1 ChangeByte- 00:08:30.370 [2024-07-23 08:21:42.863651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.863689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.370 [2024-07-23 08:21:42.863762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.370 [2024-07-23 08:21:42.863781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.629 #64 pulse cov: 12164 ft: 14532 corp: 18/2153b lim: 320 exec/s: 32 rss: 242Mb 00:08:30.629 [2024-07-23 08:21:42.923153] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.629 [2024-07-23 08:21:42.923183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:30.629 [2024-07-23 08:21:42.923257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (73) qid:0 cid:5 nsid:73737373 cdw10:73737373 cdw11:73737373 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7373737373737373 00:08:30.629 [2024-07-23 08:21:42.923273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:30.629 #65 NEW cov: 12164 ft: 14532 corp: 19/2292b lim: 320 exec/s: 32 rss: 242Mb L: 139/206 MS: 1 InsertByte- 00:08:30.629 #65 DONE cov: 12164 ft: 14532 corp: 19/2292b lim: 320 exec/s: 32 rss: 242Mb 00:08:30.629 ###### Recommended dictionary. ###### 00:08:30.629 "spdk_nvme_driver" # Uses: 0 00:08:30.629 ###### End of recommended dictionary. 
###### 00:08:30.629 Done 65 runs in 2 second(s) 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.888 08:21:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:08:30.888 [2024-07-23 08:21:43.406501] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
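The "#N NEW/pulse/DONE" lines in the run that just finished are libFuzzer's standard status output: cov counts covered code edges, ft counts coverage features (a finer-grained signal), corp gives corpus units and total bytes, lim is the current input-length cap, L is roughly the size of the new input versus the largest unit in the corpus, and MS lists the mutation sequence that produced it. After "#65 DONE" ("Done 65 runs in 2 second(s)"), the traced teardown removes the per-run config and suppression files (nvmf/run.sh@54) and the outer loop relaunches with -Z 1 on port 4401, which selects the next fuzzer entry, fuzz_admin_get_log_page_command per the NEW_FUNC lines below. A minimal sketch of that outer loop, matching the common.sh@69-73 trace (the loop body's cleanup placement is inferred from the xtrace, not copied verbatim):

  # Outer driver as traced: one fuzzer per '.fn =' entry counted in
  # llvm_nvme_fuzz.c (fuzz_num=25), each run for $time seconds.
  start_llvm_fuzz_short() {
      local fuzz_num=$1   # 25 in this job
      local time=$2       # 1 second per fuzzer in this short-fuzz run
      local i
      for ((i = 0; i < fuzz_num; i++)); do
          # start_llvm_fuzz (sketched earlier) ends by removing its per-run
          # JSON config and LSAN suppression file (nvmf/run.sh@54).
          start_llvm_fuzz "$i" "$time" 0x1
      done
  }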
00:08:30.888 [2024-07-23 08:21:43.406591] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622287 ] 00:08:31.147 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.147 [2024-07-23 08:21:43.634400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.406 [2024-07-23 08:21:43.786738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.665 [2024-07-23 08:21:44.025209] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.665 [2024-07-23 08:21:44.041453] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:31.665 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.665 INFO: Seed: 2425949356 00:08:31.665 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:31.665 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:31.665 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:31.665 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.665 #2 INITED exec/s: 0 rss: 202Mb 00:08:31.665 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:31.665 This may also happen if the target rejected all inputs we tried so far 00:08:31.665 [2024-07-23 08:21:44.097078] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:31.665 [2024-07-23 08:21:44.097243] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:31.665 [2024-07-23 08:21:44.097485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.665 [2024-07-23 08:21:44.097522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.665 [2024-07-23 08:21:44.097588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.665 [2024-07-23 08:21:44.097604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.925 NEW_FUNC[1/699]: 0x54be40 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:31.925 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.925 [2024-07-23 08:21:44.278045] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:31.925 [2024-07-23 08:21:44.278225] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:31.925 [2024-07-23 08:21:44.278499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.278558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.278644] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.278669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.925 #22 NEW cov: 12048 ft: 12055 corp: 2/15b lim: 30 exec/s: 0 rss: 219Mb L: 14/14 MS: 4 CopyPart-ChangeByte-EraseBytes-InsertRepeatedBytes- 00:08:31.925 [2024-07-23 08:21:44.352850] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.353026] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.353179] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.353324] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.353606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.353645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.353718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.353741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.353811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.353831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.353898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.353917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.392528] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.392657] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.392781] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.392880] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:31.925 [2024-07-23 08:21:44.393142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.393171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.393228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 
08:21:44.393244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.393303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.393317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:31.925 [2024-07-23 08:21:44.393378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.925 [2024-07-23 08:21:44.393395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:31.925 #25 NEW cov: 12078 ft: 13182 corp: 3/39b lim: 30 exec/s: 0 rss: 220Mb L: 24/24 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:32.184 [2024-07-23 08:21:44.460649] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.460790] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.461031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.184 [2024-07-23 08:21:44.461060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.184 [2024-07-23 08:21:44.461120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.184 [2024-07-23 08:21:44.461135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.184 [2024-07-23 08:21:44.520623] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.520760] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.520992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.184 [2024-07-23 08:21:44.521021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.184 [2024-07-23 08:21:44.521079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff8363 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.184 [2024-07-23 08:21:44.521093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.184 #27 NEW cov: 12090 ft: 13515 corp: 4/54b lim: 30 exec/s: 0 rss: 222Mb L: 15/24 MS: 1 InsertByte- 00:08:32.184 [2024-07-23 08:21:44.590353] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.590486] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.184 [2024-07-23 08:21:44.590720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.590752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.590814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.590831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.630429] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.630563] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.630791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.630819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.630881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.630897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.185 #34 NEW cov: 12176 ft: 13733 corp: 5/68b lim: 30 exec/s: 0 rss: 224Mb L: 14/24 MS: 1 ShuffleBytes- 00:08:32.185 [2024-07-23 08:21:44.694649] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.694793] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.694928] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.695056] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.185 [2024-07-23 08:21:44.695306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.695340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.695404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.695421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.695485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.695502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.185 [2024-07-23 08:21:44.695565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.185 [2024-07-23 08:21:44.695581] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.444 [2024-07-23 08:21:44.734552] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.444 [2024-07-23 08:21:44.734685] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.444 [2024-07-23 08:21:44.734804] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.444 [2024-07-23 08:21:44.734926] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.444 [2024-07-23 08:21:44.735181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.444 [2024-07-23 08:21:44.735210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.444 [2024-07-23 08:21:44.735282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.444 [2024-07-23 08:21:44.735296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.444 [2024-07-23 08:21:44.735356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.735370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.735423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.735436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.445 #36 NEW cov: 12176 ft: 13844 corp: 6/96b lim: 30 exec/s: 0 rss: 225Mb L: 28/28 MS: 1 CopyPart- 00:08:32.445 [2024-07-23 08:21:44.804086] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.804235] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.804364] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003223 00:08:32.445 [2024-07-23 08:21:44.804492] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.804745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.804778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.804841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.804858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.804921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.804937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.804998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.805014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.445 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:32.445 [2024-07-23 08:21:44.864139] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.864267] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.864389] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003223 00:08:32.445 [2024-07-23 08:21:44.864505] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.445 [2024-07-23 08:21:44.864737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.864766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.864824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.864839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.864900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.864914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.864968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.864982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.445 #38 NEW cov: 12193 ft: 13978 corp: 7/121b lim: 30 exec/s: 0 rss: 226Mb L: 25/28 MS: 1 InsertByte- 00:08:32.445 [2024-07-23 08:21:44.923031] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.445 [2024-07-23 08:21:44.923197] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.445 [2024-07-23 08:21:44.923459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.923497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.923568] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.923587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.962981] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.445 [2024-07-23 08:21:44.963130] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.445 [2024-07-23 08:21:44.963365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.963395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.445 [2024-07-23 08:21:44.963453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.445 [2024-07-23 08:21:44.963469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.705 #40 NEW cov: 12193 ft: 14034 corp: 8/136b lim: 30 exec/s: 0 rss: 228Mb L: 15/28 MS: 1 InsertByte- 00:08:32.705 [2024-07-23 08:21:45.034458] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.034611] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.034862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.034897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.034960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.034976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.074402] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.074538] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.074769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.074797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.074856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.074871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.705 #47 NEW cov: 12193 ft: 14044 corp: 9/150b lim: 30 exec/s: 47 rss: 229Mb L: 14/28 MS: 1 ShuffleBytes- 00:08:32.705 [2024-07-23 
08:21:45.145887] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.146027] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.146157] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.146282] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.146531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.146563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.146627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.146644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.146704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.146719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.146777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.146792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.206046] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.206186] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.206309] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.206425] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.705 [2024-07-23 08:21:45.206665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.206694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.206752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.206767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.206825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.206839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:32.705 [2024-07-23 08:21:45.206895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.705 [2024-07-23 08:21:45.206909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.964 #49 NEW cov: 12193 ft: 14102 corp: 10/175b lim: 30 exec/s: 49 rss: 230Mb L: 25/28 MS: 1 EraseBytes- 00:08:32.964 [2024-07-23 08:21:45.267741] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.964 [2024-07-23 08:21:45.267902] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff02 00:08:32.964 [2024-07-23 08:21:45.268181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.268213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.964 [2024-07-23 08:21:45.268280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.268297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.964 [2024-07-23 08:21:45.327530] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:32.964 [2024-07-23 08:21:45.327666] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff02 00:08:32.964 [2024-07-23 08:21:45.327905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.327934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.964 [2024-07-23 08:21:45.327993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.328008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.964 #51 NEW cov: 12193 ft: 14282 corp: 11/190b lim: 30 exec/s: 51 rss: 232Mb L: 15/28 MS: 1 CMP- DE: "\377\377\377\377\377\377\002\333"- 00:08:32.964 [2024-07-23 08:21:45.389362] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.964 [2024-07-23 08:21:45.389531] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.964 [2024-07-23 08:21:45.389681] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003223 00:08:32.964 [2024-07-23 08:21:45.389824] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.964 [2024-07-23 08:21:45.390096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.390134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.964 [2024-07-23 08:21:45.390200] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.964 [2024-07-23 08:21:45.390218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.964 [2024-07-23 08:21:45.390290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.390306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.965 [2024-07-23 08:21:45.390369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2323832b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.390384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:32.965 [2024-07-23 08:21:45.449066] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.965 [2024-07-23 08:21:45.449207] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.965 [2024-07-23 08:21:45.449331] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300003223 00:08:32.965 [2024-07-23 08:21:45.449453] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:32.965 [2024-07-23 08:21:45.449695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.449723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:32.965 [2024-07-23 08:21:45.449782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.449796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:32.965 [2024-07-23 08:21:45.449858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.449873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:32.965 [2024-07-23 08:21:45.449932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:2323832b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:32.965 [2024-07-23 08:21:45.449946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.224 #53 NEW cov: 12193 ft: 14418 corp: 12/215b lim: 30 exec/s: 53 rss: 234Mb L: 25/28 MS: 1 ChangeBit- 00:08:33.224 [2024-07-23 08:21:45.520467] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.520605] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.520730] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 
[2024-07-23 08:21:45.520848] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.521080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.521113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.521171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.521186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.521242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.521256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.521315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.521329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.560603] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.560736] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.560861] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.560983] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.561243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.561271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.561330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.561346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.561402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.561416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.561475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.561489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.224 #55 NEW cov: 12193 ft: 14438 corp: 13/243b lim: 30 exec/s: 55 rss: 235Mb L: 28/28 MS: 1 CrossOver- 00:08:33.224 [2024-07-23 08:21:45.622378] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.622545] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.622695] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:33.224 [2024-07-23 08:21:45.622978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.623011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.623077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.623095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.623164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.623181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.682129] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.682263] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.224 [2024-07-23 08:21:45.682390] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:33.224 [2024-07-23 08:21:45.682616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.682644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.682703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.682718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.224 [2024-07-23 08:21:45.682775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.224 [2024-07-23 08:21:45.682790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.224 #57 NEW cov: 12216 ft: 14762 corp: 14/262b lim: 30 exec/s: 57 rss: 237Mb L: 19/28 MS: 1 InsertRepeatedBytes- 00:08:33.224 [2024-07-23 08:21:45.743344] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.224 [2024-07-23 08:21:45.743511] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.743657] 
ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.743801] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.744069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.744104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.744163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.744179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.744240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.744255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.744311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.744325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.783054] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.783235] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.783372] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.783491] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.484 [2024-07-23 08:21:45.783720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.783749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.783807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.783821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.783872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.783886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.783940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e7238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.783954] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.484 #59 NEW cov: 12216 ft: 14848 corp: 15/287b lim: 30 exec/s: 59 rss: 238Mb L: 25/28 MS: 1 InsertByte- 00:08:33.484 [2024-07-23 08:21:45.844757] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.484 [2024-07-23 08:21:45.844930] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.484 [2024-07-23 08:21:45.845091] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:33.484 [2024-07-23 08:21:45.845387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.845421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.845495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.845513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.845578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.845596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.904320] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.484 [2024-07-23 08:21:45.904470] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.484 [2024-07-23 08:21:45.904598] ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:08:33.484 [2024-07-23 08:21:45.904833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:5bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.904862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.904920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.904935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.904994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.905009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.484 #61 NEW cov: 12216 ft: 14929 corp: 16/307b lim: 30 exec/s: 61 rss: 239Mb L: 20/28 MS: 1 InsertByte- 00:08:33.484 [2024-07-23 08:21:45.975932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
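The two ctrlr.c errors that dominate this run are the target's bounds checks on the fuzzed GET LOG PAGE commands: ctrlr.c:2646 rejects a log page offset outside the page (the recurring 0x30000ffff combines the command's upper and lower offset dwords), and ctrlr.c:2658 rejects a transfer length larger than the response buffer (262144 bytes against a 4096-byte buffer). A minimal standalone sketch of those two checks, fed the exact values from the log; the constants and function name here are illustrative, not SPDK's actual code:

/* get_log_page_check.c - illustrative sketch of the two rejections
 * logged above; compile with: cc get_log_page_check.c */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LOG_PAGE_SIZE 0x1000ULL /* assumed; SPDK sizes this per log page */
#define BUF_SIZE      4096ULL   /* buffer size named in the log message  */

static bool get_log_page_ok(uint64_t offset, uint64_t len)
{
    if (offset >= LOG_PAGE_SIZE) {  /* the ctrlr.c:2646-style offset check */
        fprintf(stderr, "Invalid log page offset 0x%" PRIx64 "\n", offset);
        return false;
    }
    if (len > BUF_SIZE) {           /* the ctrlr.c:2658-style length check */
        fprintf(stderr, "Get log page: len (%" PRIu64 ") > buf size (%" PRIu64 ")\n",
                len, (uint64_t)BUF_SIZE);
        return false;
    }
    return true;
}

int main(void)
{
    get_log_page_ok(0x30000ffffULL, 16); /* the offset the fuzzer keeps hitting */
    get_log_page_ok(0, 262144);          /* the len > 4096 rejection */
    return 0;
}

Each rejection still completes the command with INVALID FIELD status, which is why every *ERROR* line above is paired with a normal completion notice rather than a crash.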
00:08:33.484 [2024-07-23 08:21:45.975963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.484 [2024-07-23 08:21:45.976020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.484 [2024-07-23 08:21:45.976035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.742 [2024-07-23 08:21:46.015954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.742 [2024-07-23 08:21:46.015982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.742 [2024-07-23 08:21:46.016048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.742 [2024-07-23 08:21:46.016063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.742 #66 NEW cov: 12233 ft: 15087 corp: 17/320b lim: 30 exec/s: 66 rss: 240Mb L: 13/28 MS: 4 CopyPart-EraseBytes-ChangeByte-InsertRepeatedBytes- 00:08:33.742 [2024-07-23 08:21:46.086358] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.742 [2024-07-23 08:21:46.086506] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.742 [2024-07-23 08:21:46.086637] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.742 [2024-07-23 08:21:46.086769] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.742 [2024-07-23 08:21:46.087019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.742 [2024-07-23 08:21:46.087052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.742 [2024-07-23 08:21:46.087122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.087140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.087200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.087217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.087277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23328323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.087292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.126276] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.743 
[2024-07-23 08:21:46.126415] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:33.743 [2024-07-23 08:21:46.126540] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.743 [2024-07-23 08:21:46.126663] ctrlr.c:2646:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002323 00:08:33.743 [2024-07-23 08:21:46.126892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a8323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.126920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.126979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.126994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.127048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23238323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.127063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:33.743 [2024-07-23 08:21:46.127137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:23328323 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.743 [2024-07-23 08:21:46.127152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:33.743 #68 NEW cov: 12233 ft: 15106 corp: 18/348b lim: 30 exec/s: 34 rss: 241Mb L: 28/28 MS: 1 CrossOver- 00:08:33.743 #68 DONE cov: 12233 ft: 15106 corp: 18/348b lim: 30 exec/s: 34 rss: 241Mb 00:08:33.743 ###### Recommended dictionary. ###### 00:08:33.743 "\377\377\377\377\377\377\002\333" # Uses: 0 00:08:33.743 ###### End of recommended dictionary. 
###### 00:08:33.743 Done 68 runs in 2 second(s) 00:08:34.308 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:08:34.308 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:34.309 08:21:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:08:34.309 [2024-07-23 08:21:46.639869] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
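Two details of the handoff above are worth unpacking. First, the recommended dictionary from the finished run holds a single octal-escaped entry, "\377\377\377\377\377\377\002\333", i.e. the bytes ff ff ff ff ff ff 02 db recovered by the earlier CMP mutation; lines in that format can be saved to a file and fed back to libFuzzer with -dict=<file> on a later run. Second, the -F argument in the new invocation is an SPDK transport ID string: space-separated key:value pairs selecting the transport, address family, target address, service port, and subsystem NQN, with the port formed by appending the zero-padded fuzzer index to 44 (printf %02d 2, hence 4402) so each parallel fuzzer listens on its own port. SPDK parses the string with spdk_nvme_transport_id_parse(); the standalone toy below only illustrates the layout, including the subnqn value that itself contains a colon:

/* trid_parse.c - toy parser for the transport-ID format passed via -F.
 * Illustration only; SPDK's real parser is spdk_nvme_transport_id_parse(). */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char trid[] = "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 "
                  "traddr:127.0.0.1 trsvcid:4402";

    for (char *tok = strtok(trid, " "); tok != NULL; tok = strtok(NULL, " ")) {
        char *sep = strchr(tok, ':'); /* the key ends at the first colon only */
        if (sep == NULL) {
            continue;
        }
        *sep = '\0';
        printf("%-7s = %s\n", tok, sep + 1); /* subnqn = nqn.2016-06.io.spdk:cnode1 */
    }
    return 0;
}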
00:08:34.309 [2024-07-23 08:21:46.639974] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622942 ] 00:08:34.309 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.567 [2024-07-23 08:21:46.866381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.567 [2024-07-23 08:21:47.015697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.824 [2024-07-23 08:21:47.253413] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.824 [2024-07-23 08:21:47.269684] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:34.824 INFO: Running with entropic power schedule (0xFF, 100). 00:08:34.824 INFO: Seed: 1356936836 00:08:34.824 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:34.824 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:34.824 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:34.825 INFO: A corpus is not provided, starting from an empty corpus 00:08:34.825 #2 INITED exec/s: 0 rss: 201Mb 00:08:34.825 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:34.825 This may also happen if the target rejected all inputs we tried so far 00:08:34.825 [2024-07-23 08:21:47.328950] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:34.825 [2024-07-23 08:21:47.329082] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:34.825 [2024-07-23 08:21:47.329326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.825 [2024-07-23 08:21:47.329360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:34.825 [2024-07-23 08:21:47.329416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.825 [2024-07-23 08:21:47.329431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:34.825 [2024-07-23 08:21:47.329482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.825 [2024-07-23 08:21:47.329498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.082 NEW_FUNC[1/698]: 0x54ee10 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:35.082 NEW_FUNC[2/698]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:35.082 [2024-07-23 08:21:47.509563] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.082 [2024-07-23 08:21:47.509703] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.082 [2024-07-23 08:21:47.509939] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.082 [2024-07-23 08:21:47.509989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.082 [2024-07-23 08:21:47.510062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.082 [2024-07-23 08:21:47.510085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.082 [2024-07-23 08:21:47.510152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.082 [2024-07-23 08:21:47.510174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.082 #8 NEW cov: 12019 ft: 12020 corp: 2/23b lim: 35 exec/s: 0 rss: 219Mb L: 22/22 MS: 5 ChangeBit-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:35.082 [2024-07-23 08:21:47.589390] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.083 [2024-07-23 08:21:47.589535] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.083 [2024-07-23 08:21:47.589770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.083 [2024-07-23 08:21:47.589801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.083 [2024-07-23 08:21:47.589856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.083 [2024-07-23 08:21:47.589872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.083 [2024-07-23 08:21:47.589926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.083 [2024-07-23 08:21:47.589942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.649338] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.649472] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.649690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.649717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.649773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.649789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.649842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.649857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.341 #10 NEW cov: 12043 ft: 12348 corp: 3/45b lim: 35 exec/s: 0 rss: 221Mb L: 22/22 MS: 1 ShuffleBytes- 00:08:35.341 [2024-07-23 08:21:47.721371] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.721514] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.721754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.721788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.721850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.721869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.721924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.721941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.781321] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.781453] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.781673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.781701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.781761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.781777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.781833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.781849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.341 #12 NEW cov: 12055 ft: 12652 corp: 4/67b lim: 35 exec/s: 0 rss: 223Mb L: 22/22 MS: 1 ShuffleBytes- 00:08:35.341 [2024-07-23 08:21:47.854026] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.854191] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.341 [2024-07-23 08:21:47.854424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.854454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.854510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.854531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.341 [2024-07-23 08:21:47.854588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.341 [2024-07-23 08:21:47.854604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.599 [2024-07-23 08:21:47.913979] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.599 [2024-07-23 08:21:47.914117] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.599 [2024-07-23 08:21:47.914334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.599 [2024-07-23 08:21:47.914379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.599 [2024-07-23 08:21:47.914434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.599 [2024-07-23 08:21:47.914451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.599 [2024-07-23 08:21:47.914507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.599 [2024-07-23 08:21:47.914523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.599 #14 NEW cov: 12141 ft: 12954 corp: 5/89b lim: 35 exec/s: 0 rss: 224Mb L: 22/22 MS: 1 ChangeBinInt- 00:08:35.599 [2024-07-23 08:21:47.988158] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.599 [2024-07-23 08:21:47.988430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.599 [2024-07-23 08:21:47.988468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.599 [2024-07-23 08:21:47.988532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.599 [2024-07-23 08:21:47.988556] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.599 [2024-07-23 08:21:48.027807] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.599 [2024-07-23 08:21:48.028039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.600 [2024-07-23 08:21:48.028067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.600 [2024-07-23 08:21:48.028131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.600 [2024-07-23 08:21:48.028150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.600 #16 NEW cov: 12141 ft: 13323 corp: 6/104b lim: 35 exec/s: 0 rss: 226Mb L: 15/22 MS: 1 InsertRepeatedBytes- 00:08:35.600 [2024-07-23 08:21:48.100922] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.600 [2024-07-23 08:21:48.101088] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.600 [2024-07-23 08:21:48.101355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.600 [2024-07-23 08:21:48.101392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.600 [2024-07-23 08:21:48.101462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.600 [2024-07-23 08:21:48.101484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.600 [2024-07-23 08:21:48.101550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.600 [2024-07-23 08:21:48.101574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.160579] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.858 [2024-07-23 08:21:48.160711] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.858 [2024-07-23 08:21:48.160941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.160970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.161031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.161049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.161104] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.161119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:35.858 #18 NEW cov: 12141 ft: 13452 corp: 7/126b lim: 35 exec/s: 0 rss: 228Mb L: 22/22 MS: 1 ChangeBinInt- 00:08:35.858 [2024-07-23 08:21:48.232948] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.858 [2024-07-23 08:21:48.233261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.233299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.233368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.233391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.292650] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.858 [2024-07-23 08:21:48.292884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.292913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.292967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.292983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.858 #20 NEW cov: 12141 ft: 13616 corp: 8/142b lim: 35 exec/s: 20 rss: 229Mb L: 16/22 MS: 1 InsertByte- 00:08:35.858 [2024-07-23 08:21:48.364421] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:35.858 [2024-07-23 08:21:48.364723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.364760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.364828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6d660076 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.364848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:35.858 [2024-07-23 08:21:48.364923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:35.858 [2024-07-23 08:21:48.364943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 
[2024-07-23 08:21:48.403966] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.404205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.404249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.404306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:6d660076 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.404321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.404374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.404389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 #22 NEW cov: 12141 ft: 13708 corp: 9/164b lim: 35 exec/s: 22 rss: 230Mb L: 22/22 MS: 1 CMP- DE: "nvmf"- 00:08:36.117 [2024-07-23 08:21:48.475456] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.475624] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.475907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.475946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.476014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.476037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.476108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.476130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.515017] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.515154] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.515377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:00000080 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.515405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.515466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
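Every input printed in this identify run so far dies at the same guard: ctrlr.c:2740 (_nvmf_ctrlr_get_ns_safe) reports "Identify Namespace for invalid NSID 0" because the fuzzed commands carry namespace ID 0, which NVMe reserves, so the target completes them with INVALID NAMESPACE OR FORMAT before any namespace lookup happens. A sketch of that kind of guard, under an assumed namespace count; illustrative, not the SPDK function:

/* nsid_check.c - illustrative NSID guard for Identify Namespace. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_NSID 32u /* assumed number of namespaces in the subsystem */

static bool identify_ns_nsid_ok(uint32_t nsid)
{
    /* NSID 0 is reserved; 0xFFFFFFFF is the broadcast NSID, which real
     * code special-cases and this sketch simply treats as out of range. */
    if (nsid == 0 || nsid > MAX_NSID) {
        fprintf(stderr, "Identify Namespace for invalid NSID %" PRIu32 "\n", nsid);
        return false;
    }
    return true;
}

int main(void)
{
    identify_ns_nsid_ok(0); /* the case every fuzz input above hits */
    identify_ns_nsid_ok(1); /* an in-range namespace would pass */
    return 0;
}

Coverage still grows (#8 through #22 above) because each rejection exercises new error paths; reaching the deeper identify handling would take inputs with a nonzero, in-range NSID.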
00:08:36.117 [2024-07-23 08:21:48.515482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.515532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.515548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 #24 NEW cov: 12141 ft: 13721 corp: 10/186b lim: 35 exec/s: 24 rss: 231Mb L: 22/22 MS: 1 ChangeBit- 00:08:36.117 [2024-07-23 08:21:48.586497] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.586664] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.586942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.586980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.587051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.587074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.587148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.587170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.626124] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.626251] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.117 [2024-07-23 08:21:48.626473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.626501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.626556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.626572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.117 [2024-07-23 08:21:48.626625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.117 [2024-07-23 08:21:48.626640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.376 #26 NEW cov: 12141 ft: 13750 corp: 11/208b lim: 35 exec/s: 26 rss: 232Mb L: 22/22 MS: 1 ChangeBinInt- 
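The trailing MS: field on each NEW line is the mutation sequence libFuzzer applied to derive that input: a single ChangeBinInt on the line just above, a five-step ChangeBit-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes chain for this run's first corpus entry. For structured targets these built-in mutators can be replaced through the optional LLVMFuzzerCustomMutator hook; this run uses the defaults, so the sketch below is only a reference for the hook's shape, not part of llvm_nvme_fuzz:

/* custom_mutator_sketch.c - build as a fuzzer with:
 *   clang -g -fsanitize=fuzzer custom_mutator_sketch.c */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Trivial target so the file links as a complete fuzzer. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    (void)data;
    (void)size;
    return 0;
}

/* When defined, libFuzzer calls this in place of its built-in
 * mutators; this one performs a ShuffleBytes-style two-byte swap. */
size_t LLVMFuzzerCustomMutator(uint8_t *data, size_t size,
                               size_t max_size, unsigned int seed)
{
    (void)max_size;  /* a swap never changes the input length */
    if (size < 2) {
        return size;
    }
    srand(seed);     /* keeps the mutation reproducible per libFuzzer seed */
    size_t a = (size_t)rand() % size;
    size_t b = (size_t)rand() % size;
    uint8_t tmp = data[a];
    data[a] = data[b];
    data[b] = tmp;
    return size;
}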
00:08:36.376 [2024-07-23 08:21:48.697833] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.697995] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.698307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.698343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.698409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.698431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.698503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.698524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.757273] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.757403] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.757623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000a0018 cdw11:16000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.757651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.757704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.757720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.757774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.757789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.376 #28 NEW cov: 12141 ft: 13822 corp: 12/230b lim: 35 exec/s: 28 rss: 234Mb L: 22/22 MS: 1 ChangeBit- 00:08:36.376 [2024-07-23 08:21:48.828654] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.828940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.828977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.829047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.829070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.376 [2024-07-23 08:21:48.868260] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.376 [2024-07-23 08:21:48.868489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a000018 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.376 [2024-07-23 08:21:48.868517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.377 [2024-07-23 08:21:48.868569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.377 [2024-07-23 08:21:48.868585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 #30 NEW cov: 12141 ft: 13894 corp: 13/244b lim: 35 exec/s: 30 rss: 235Mb L: 14/22 MS: 1 EraseBytes- 00:08:36.635 [2024-07-23 08:21:48.939886] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.635 [2024-07-23 08:21:48.940045] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.635 [2024-07-23 08:21:48.940317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:006e0018 cdw11:6600766d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-07-23 08:21:48.940355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.635 [2024-07-23 08:21:48.940424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-07-23 08:21:48.940446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.635 [2024-07-23 08:21:48.940519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.635 [2024-07-23 08:21:48.940540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.635 [2024-07-23 08:21:48.979521] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.635 [2024-07-23 08:21:48.979654] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.635 [2024-07-23 08:21:48.979869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:006e0018 cdw11:6600766d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:48.979897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:48.979949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:48.979965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:48.980017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:06000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:48.980032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.636 #32 NEW cov: 12141 ft: 13909 corp: 14/266b lim: 35 exec/s: 32 rss: 236Mb L: 22/22 MS: 1 PersAutoDict- DE: "nvmf"- 00:08:36.636 [2024-07-23 08:21:49.051919] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.052080] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.052229] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.052491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00180018 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.052529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.052603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.052627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.052700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.052721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.052785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.052805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:36.636 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:36.636 [2024-07-23 08:21:49.101614] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.101744] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.101866] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.636 [2024-07-23 08:21:49.102085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00180018 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.102122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.102181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.102198] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.102250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.102266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:36.636 [2024-07-23 08:21:49.102317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.636 [2024-07-23 08:21:49.102333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:36.636 #34 NEW cov: 12158 ft: 14420 corp: 15/298b lim: 35 exec/s: 34 rss: 237Mb L: 32/32 MS: 1 CopyPart- 00:08:36.894 [2024-07-23 08:21:49.174486] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.894 [2024-07-23 08:21:49.174625] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.894 [2024-07-23 08:21:49.174846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:16000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-07-23 08:21:49.174878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.894 [2024-07-23 08:21:49.174933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.894 [2024-07-23 08:21:49.174949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.894 [2024-07-23 08:21:49.214147] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.894 [2024-07-23 08:21:49.214277] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.894 [2024-07-23 08:21:49.214500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:16000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.214529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.895 [2024-07-23 08:21:49.214582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.214598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.895 #36 NEW cov: 12158 ft: 14456 corp: 16/313b lim: 35 exec/s: 36 rss: 238Mb L: 15/32 MS: 1 CrossOver- 00:08:36.895 [2024-07-23 08:21:49.285161] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.895 [2024-07-23 08:21:49.285320] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.895 [2024-07-23 08:21:49.285595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.285636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.895 [2024-07-23 08:21:49.285704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.285730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.895 [2024-07-23 08:21:49.324898] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.895 [2024-07-23 08:21:49.325028] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:36.895 [2024-07-23 08:21:49.325255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.325285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:36.895 [2024-07-23 08:21:49.325341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:36.895 [2024-07-23 08:21:49.325366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:36.895 #38 NEW cov: 12158 ft: 14482 corp: 17/328b lim: 35 exec/s: 19 rss: 239Mb L: 15/32 MS: 1 EraseBytes- 00:08:36.895 #38 DONE cov: 12158 ft: 14482 corp: 17/328b lim: 35 exec/s: 19 rss: 239Mb 00:08:36.895 ###### Recommended dictionary. ###### 00:08:36.895 "nvmf" # Uses: 1 00:08:36.895 ###### End of recommended dictionary. 
###### 00:08:36.895 Done 38 runs in 2 second(s) 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:37.462 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:37.463 08:21:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:08:37.463 [2024-07-23 08:21:49.831586] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
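The nvmf/run.sh trace above (script lines @23 through @45, after the @54 cleanup of the previous run) is the whole per-fuzzer launch recipe. Below is a condensed shell sketch of that same sequence, assuming $SPDK stands for the checked-out tree (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk in this job); two details the trace leaves implicit are hedged in the comments, namely where the sed output lands and how the LSAN suppression file is populated.

  # Condensed restatement of the traced nvmf/run.sh steps for fuzzer_type=3.
  # Assumption: $SPDK is the repo root shown in the full paths above.
  fuzzer_type=3
  timen=1                               # run length in seconds (@24)
  core=0x1                              # core mask (@25)
  port=44$(printf %02d "$fuzzer_type")  # @34: yields 4403
  corpus_dir="$SPDK/../corpus/llvm_nvmf_$fuzzer_type"
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  export LSAN_OPTIONS="report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0"
  mkdir -p "$corpus_dir"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  # @38: retarget the JSON config at this run's port; redirecting the result
  # into $nvmf_cfg is an assumption inferred from the -c argument below, the
  # trace shows only the sed command itself.
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
  # @41-@42: known in-target leaks; appending to the suppression file named
  # in LSAN_OPTIONS is likewise assumed, the trace shows only the echos.
  echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create          >> /var/tmp/suppress_nvmf_fuzz
  # @45: launch one llvm_nvme_fuzz instance against the TCP target; -Z selects
  # which admin-command fuzzer runs (this -Z 3 run lands in
  # fuzz_admin_abort_command per its NEW_FUNC line below).
  "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
      -P "$SPDK/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
      -D "$corpus_dir" -Z "$fuzzer_type"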
00:08:37.463 [2024-07-23 08:21:49.831679] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623398 ] 00:08:37.463 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.721 [2024-07-23 08:21:50.053167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.721 [2024-07-23 08:21:50.207367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.979 [2024-07-23 08:21:50.452421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.979 [2024-07-23 08:21:50.468681] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:37.979 INFO: Running with entropic power schedule (0xFF, 100). 00:08:37.979 INFO: Seed: 263005137 00:08:38.238 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:38.238 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:38.238 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:38.238 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.238 #2 INITED exec/s: 0 rss: 202Mb 00:08:38.238 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:38.238 This may also happen if the target rejected all inputs we tried so far 00:08:38.238 NEW_FUNC[1/687]: 0x550e50 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:38.238 NEW_FUNC[2/687]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:38.497 #5 NEW cov: 11923 ft: 11896 corp: 2/16b lim: 20 exec/s: 0 rss: 218Mb L: 15/15 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:38.497 #8 NEW cov: 11964 ft: 12481 corp: 3/33b lim: 20 exec/s: 0 rss: 220Mb L: 17/17 MS: 2 CopyPart-CMP- DE: "spdk_nvme_driver"- 00:08:38.754 #10 NEW cov: 11976 ft: 13047 corp: 4/50b lim: 20 exec/s: 0 rss: 222Mb L: 17/17 MS: 1 ChangeByte- 00:08:38.754 #12 NEW cov: 12062 ft: 13244 corp: 5/65b lim: 20 exec/s: 0 rss: 223Mb L: 15/17 MS: 1 ShuffleBytes- 00:08:38.754 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:39.012 #14 NEW cov: 12079 ft: 13388 corp: 6/82b lim: 20 exec/s: 0 rss: 226Mb L: 17/17 MS: 1 ShuffleBytes- 00:08:39.012 #16 NEW cov: 12079 ft: 13502 corp: 7/99b lim: 20 exec/s: 0 rss: 227Mb L: 17/17 MS: 1 ChangeBit- 00:08:39.269 #18 NEW cov: 12079 ft: 13528 corp: 8/116b lim: 20 exec/s: 18 rss: 229Mb L: 17/17 MS: 1 ShuffleBytes- 00:08:39.270 #21 NEW cov: 12079 ft: 13580 corp: 9/133b lim: 20 exec/s: 21 rss: 230Mb L: 17/17 MS: 2 CopyPart-PersAutoDict- DE: "spdk_nvme_driver"- 00:08:39.528 #23 NEW cov: 12079 ft: 13612 corp: 10/151b lim: 20 exec/s: 23 rss: 231Mb L: 18/18 MS: 1 InsertByte- 00:08:39.528 #25 NEW cov: 12079 ft: 13690 corp: 11/168b lim: 20 exec/s: 25 rss: 232Mb L: 17/18 MS: 1 ShuffleBytes- 00:08:39.786 #27 NEW cov: 12079 ft: 13707 corp: 12/186b lim: 20 exec/s: 27 rss: 233Mb L: 18/18 MS: 1 CrossOver- 00:08:39.786 #29 NEW cov: 12079 ft: 13814 corp: 13/204b lim: 20 exec/s: 29 rss: 235Mb L: 18/18 MS: 1 ChangeByte- 00:08:40.044 #31 NEW cov: 12079 ft: 13823 corp: 14/217b lim: 20 exec/s: 31 rss: 236Mb L: 13/18 MS: 1 EraseBytes- 00:08:40.044 #38 NEW cov: 12079 ft: 13832 
corp: 15/234b lim: 20 exec/s: 38 rss: 237Mb L: 17/18 MS: 1 ChangeByte- 00:08:40.302 #40 NEW cov: 12080 ft: 13959 corp: 16/250b lim: 20 exec/s: 20 rss: 238Mb L: 16/18 MS: 1 EraseBytes- 00:08:40.302 #40 DONE cov: 12080 ft: 13959 corp: 16/250b lim: 20 exec/s: 20 rss: 239Mb 00:08:40.302 ###### Recommended dictionary. ###### 00:08:40.302 "spdk_nvme_driver" # Uses: 1 00:08:40.302 ###### End of recommended dictionary. ###### 00:08:40.302 Done 40 runs in 2 second(s) 00:08:40.561 08:21:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:40.561 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:40.562 08:21:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:08:40.562 [2024-07-23 08:21:53.057302] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
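Each of these runs emits the same libFuzzer progress lines ("#N NEW cov: ... ft: ... corp: ... exec/s: ... rss: ...", closed out by a "#N DONE" summary). When comparing transcripts across nightly builds, a throwaway filter helps; the awk sketch below is illustrative only, not part of the harness, and assumes the console output was saved to a file named console.log. It scans fields instead of using fixed positions, so the leading Jenkins timestamps do not throw it off.

  # Illustrative helper (assumption: log saved as console.log): pull the
  # headline coverage numbers out of libFuzzer "#N NEW/DONE cov:" lines.
  awk '/ (NEW|DONE) cov: / {
      for (i = 1; i <= NF; i++) {
          if ($i == "cov:")    cov = $(i + 1)
          if ($i == "ft:")     ft  = $(i + 1)
          if ($i == "exec/s:") eps = $(i + 1)
          if ($i == "rss:")    rss = $(i + 1)
      }
      printf "cov=%s ft=%s exec/s=%s rss=%s\n", cov, ft, eps, rss
  }' console.log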
00:08:40.562 [2024-07-23 08:21:53.057389] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623869 ] 00:08:40.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.820 [2024-07-23 08:21:53.284911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.078 [2024-07-23 08:21:53.439538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.336 [2024-07-23 08:21:53.680738] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.336 [2024-07-23 08:21:53.697021] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:08:41.336 INFO: Running with entropic power schedule (0xFF, 100). 00:08:41.336 INFO: Seed: 3489991250 00:08:41.336 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:41.336 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:41.337 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:41.337 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.337 #2 INITED exec/s: 0 rss: 202Mb 00:08:41.337 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.337 This may also happen if the target rejected all inputs we tried so far 00:08:41.337 [2024-07-23 08:21:53.753299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.337 [2024-07-23 08:21:53.753336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.337 [2024-07-23 08:21:53.753409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.337 [2024-07-23 08:21:53.753424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.337 [2024-07-23 08:21:53.753474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.337 [2024-07-23 08:21:53.753488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.337 [2024-07-23 08:21:53.753533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.337 [2024-07-23 08:21:53.753550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.595 NEW_FUNC[1/699]: 0x552120 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:08:41.595 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:41.595 [2024-07-23 08:21:53.934085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:41.595 [2024-07-23 08:21:53.934142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:53.934216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:53.934235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:53.934297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:53.934315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:53.934375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:53.934393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.596 #17 NEW cov: 12031 ft: 12003 corp: 2/34b lim: 35 exec/s: 0 rss: 219Mb L: 33/33 MS: 4 ShuffleBytes-InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:41.596 [2024-07-23 08:21:54.011803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.011847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.011911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.011930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.011996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.012014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.012077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.012095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.012167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.012186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.071346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.071376] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.071450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.071465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.071521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.071534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.071588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.071602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.596 [2024-07-23 08:21:54.071653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.596 [2024-07-23 08:21:54.071666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.596 #19 NEW cov: 12055 ft: 12462 corp: 3/69b lim: 35 exec/s: 0 rss: 221Mb L: 35/35 MS: 1 CMP- DE: "\377\007"- 00:08:41.858 [2024-07-23 08:21:54.130876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.130917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.130989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.131009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.131076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.131095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.180417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.180445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.180506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.180521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:41.858 [2024-07-23 08:21:54.180571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.180584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.858 #21 NEW cov: 12067 ft: 13155 corp: 4/92b lim: 35 exec/s: 0 rss: 222Mb L: 23/35 MS: 1 EraseBytes- 00:08:41.858 [2024-07-23 08:21:54.241267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.241299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.241358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.241373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.241424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.241438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.241495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.241508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.241564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.241578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.301111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.301140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.301209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.301224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.301275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.301289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.301350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.301364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.301420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.301433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:41.858 #23 NEW cov: 12153 ft: 13480 corp: 5/127b lim: 35 exec/s: 0 rss: 224Mb L: 35/35 MS: 1 ShuffleBytes- 00:08:41.858 [2024-07-23 08:21:54.361605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.361636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.361712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.361728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.361783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.361800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.361854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.361867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:41.858 [2024-07-23 08:21:54.361922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:41.858 [2024-07-23 08:21:54.361935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.421539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.421571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.421650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.421665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.421720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 
[2024-07-23 08:21:54.421734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.421785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.421799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.421852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.421865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.138 #30 NEW cov: 12153 ft: 13536 corp: 6/162b lim: 35 exec/s: 0 rss: 226Mb L: 35/35 MS: 1 ChangeByte- 00:08:42.138 [2024-07-23 08:21:54.484353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00230000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.484388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.484448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.484465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.484527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.484542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.484599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.484614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.484669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.484687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.523864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00230000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.523893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.523950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.523965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.524017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.524031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.138 [2024-07-23 08:21:54.524082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.138 [2024-07-23 08:21:54.524096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.524154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.524167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.139 #37 NEW cov: 12153 ft: 13799 corp: 7/197b lim: 35 exec/s: 0 rss: 227Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:42.139 [2024-07-23 08:21:54.585750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.585788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.585849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.585867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.585925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.585941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.586004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.586020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.586077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.586093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.625275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.625306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.625366] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.625382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.625433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.625447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.625498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.625512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.139 [2024-07-23 08:21:54.625560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.139 [2024-07-23 08:21:54.625573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.415 #39 NEW cov: 12153 ft: 13946 corp: 8/232b lim: 35 exec/s: 0 rss: 228Mb L: 35/35 MS: 1 CopyPart- 00:08:42.415 [2024-07-23 08:21:54.686674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.686712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.686778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.686795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.686856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.686872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.686930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.686945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.687011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.687027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.746235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.746268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.746323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.746337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.746391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.746407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.746463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.746477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.746531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.746544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.416 #41 NEW cov: 12153 ft: 14046 corp: 9/267b lim: 35 exec/s: 41 rss: 230Mb L: 35/35 MS: 1 CrossOver- 00:08:42.416 [2024-07-23 08:21:54.817720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.817764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.817827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.817845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.817904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.817919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.817974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.817989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.818043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.818059] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.857596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.857627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.857704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.857719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.857772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.857786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.857838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.857851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.416 [2024-07-23 08:21:54.857905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.416 [2024-07-23 08:21:54.857921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.416 #43 NEW cov: 12153 ft: 14112 corp: 10/302b lim: 35 exec/s: 43 rss: 230Mb L: 35/35 MS: 1 ChangeByte- 00:08:42.718 [2024-07-23 08:21:54.927930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.927967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:54.928028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.928045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:54.928105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.928121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:54.987929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.987961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:42.718 [2024-07-23 08:21:54.988019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.988034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:54.988085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:54.988104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.718 #45 NEW cov: 12153 ft: 14198 corp: 11/326b lim: 35 exec/s: 45 rss: 232Mb L: 24/35 MS: 1 CopyPart- 00:08:42.718 [2024-07-23 08:21:55.049342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00230000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.049375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.049431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.049447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.049502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.049517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.049569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.049583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.049636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.049654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.109356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00230000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.109385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.109445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.109459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.109509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.109523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.109574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.109588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.109638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.109651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:42.718 #47 NEW cov: 12153 ft: 14234 corp: 12/361b lim: 35 exec/s: 47 rss: 233Mb L: 35/35 MS: 1 ChangeBit- 00:08:42.718 [2024-07-23 08:21:55.180763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.180795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.180848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.180863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.180916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.180929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.180980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.180993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:42.718 [2024-07-23 08:21:55.181044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:42.718 [2024-07-23 08:21:55.181057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.240917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.240949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.241010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 
[2024-07-23 08:21:55.241025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.241076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.241090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.241145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.241159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.241211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.241225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.054 #49 NEW cov: 12153 ft: 14244 corp: 13/396b lim: 35 exec/s: 49 rss: 235Mb L: 35/35 MS: 1 CrossOver- 00:08:43.054 [2024-07-23 08:21:55.297949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.297982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.298042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.298057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.298114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.298129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.298182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.298196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.298248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e2000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.298262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.357538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.357567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.357620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.357634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.357686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.357702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.357753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00eb0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.357766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.054 [2024-07-23 08:21:55.357818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e2000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.357830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.054 #51 NEW cov: 12153 ft: 14282 corp: 14/431b lim: 35 exec/s: 51 rss: 237Mb L: 35/35 MS: 1 ChangeByte- 00:08:43.054 [2024-07-23 08:21:55.432404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00237a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.054 [2024-07-23 08:21:55.432439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.432498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.432515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.432580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.432596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.432654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.432669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.432728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.432744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.472218] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00237a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.472247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.472318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.472333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.472384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.472398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.472453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.472467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.472517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.472534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.055 #53 NEW cov: 12153 ft: 14313 corp: 15/466b lim: 35 exec/s: 53 rss: 237Mb L: 35/35 MS: 1 ChangeByte- 00:08:43.055 [2024-07-23 08:21:55.534828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.534864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.534924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff000afc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.534941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.535005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.535020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.535079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.535095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.055 [2024-07-23 08:21:55.535157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.055 [2024-07-23 08:21:55.535173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.314 [2024-07-23 08:21:55.594380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:27000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.594409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.314 [2024-07-23 08:21:55.594464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff000afc cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.594479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.314 [2024-07-23 08:21:55.594530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00ff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.594545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.314 [2024-07-23 08:21:55.594596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.594609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:43.314 [2024-07-23 08:21:55.594661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.594674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:43.314 #55 NEW cov: 12153 ft: 14324 corp: 16/501b lim: 35 exec/s: 55 rss: 239Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:43.314 [2024-07-23 08:21:55.665365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.314 [2024-07-23 08:21:55.665404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.315 [2024-07-23 08:21:55.665465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.315 [2024-07-23 08:21:55.665481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.315 [2024-07-23 08:21:55.665536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.315 [2024-07-23 08:21:55.665552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.315 [2024-07-23 08:21:55.705246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.315 [2024-07-23 08:21:55.705276] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:43.315 [2024-07-23 08:21:55.705336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.315 [2024-07-23 08:21:55.705351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:43.315 [2024-07-23 08:21:55.705403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:43.315 [2024-07-23 08:21:55.705417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:43.315 #57 NEW cov: 12153 ft: 14343 corp: 17/524b lim: 35 exec/s: 28 rss: 240Mb L: 23/35 MS: 1 ShuffleBytes- 00:08:43.315 #57 DONE cov: 12153 ft: 14343 corp: 17/524b lim: 35 exec/s: 28 rss: 240Mb 00:08:43.315 ###### Recommended dictionary. ###### 00:08:43.315 "\377\007" # Uses: 1 00:08:43.315 ###### End of recommended dictionary. ###### 00:08:43.315 Done 57 runs in 2 second(s) 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.883 08:21:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:08:43.883 [2024-07-23 08:21:56.218293] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:43.883 [2024-07-23 08:21:56.218381] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624514 ] 00:08:43.883 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.142 [2024-07-23 08:21:56.442869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.142 [2024-07-23 08:21:56.591470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.401 [2024-07-23 08:21:56.829650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.401 [2024-07-23 08:21:56.845913] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:08:44.401 INFO: Running with entropic power schedule (0xFF, 100). 00:08:44.401 INFO: Seed: 2344021135 00:08:44.401 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:44.401 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:44.401 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:08:44.401 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.401 #2 INITED exec/s: 0 rss: 202Mb 00:08:44.401 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:44.401 This may also happen if the target rejected all inputs we tried so far 00:08:44.401 [2024-07-23 08:21:56.914217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.401 [2024-07-23 08:21:56.914261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.659 NEW_FUNC[1/699]: 0x554670 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:08:44.659 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:44.659 [2024-07-23 08:21:57.117923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.659 [2024-07-23 08:21:57.117983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.659 #15 NEW cov: 12042 ft: 12037 corp: 2/17b lim: 45 exec/s: 0 rss: 218Mb L: 16/16 MS: 5 ChangeByte-InsertByte-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:08:44.918 [2024-07-23 08:21:57.197246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.919 [2024-07-23 08:21:57.197284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.919 [2024-07-23 08:21:57.247299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.919 [2024-07-23 08:21:57.247332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.919 #17 NEW cov: 12066 ft: 12551 corp: 3/34b lim: 45 exec/s: 0 rss: 220Mb L: 17/17 MS: 1 CMP- DE: "spdk_nvme_driver"- 00:08:44.919 [2024-07-23 08:21:57.319444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.919 [2024-07-23 08:21:57.319478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.919 [2024-07-23 08:21:57.389835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.919 [2024-07-23 08:21:57.389867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:44.919 #24 NEW cov: 12078 ft: 13010 corp: 4/50b lim: 45 exec/s: 0 rss: 221Mb L: 16/17 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:45.178 [2024-07-23 08:21:57.461237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.461269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.178 [2024-07-23 08:21:57.461350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.461369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.178 [2024-07-23 08:21:57.521668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.521698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.178 [2024-07-23 08:21:57.521784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.521801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.178 #26 NEW cov: 12164 ft: 14087 corp: 5/75b lim: 45 exec/s: 0 rss: 223Mb L: 25/25 MS: 1 CopyPart- 00:08:45.178 [2024-07-23 08:21:57.592494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff150a00 cdw11:5c1f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.592525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.178 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:45.178 [2024-07-23 08:21:57.642757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff150a00 cdw11:5c1f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.178 [2024-07-23 08:21:57.642788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.178 #30 NEW cov: 12181 ft: 14138 corp: 6/85b lim: 45 exec/s: 0 rss: 225Mb L: 10/25 MS: 3 CopyPart-InsertByte-CMP- DE: "\377\025\\\0373a\372\250"- 00:08:45.437 [2024-07-23 08:21:57.714321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.714358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.714459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:72695f64 cdw11:76650003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.714478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.714563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:655f766d cdw11:64720003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.714579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.784479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.784516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.784610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:72695f64 cdw11:76650003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.784626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.784701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:655f766d cdw11:64720003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.784716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:45.437 #32 NEW cov: 12181 ft: 14418 corp: 7/114b lim: 45 exec/s: 0 rss: 226Mb L: 29/29 MS: 1 CrossOver- 00:08:45.437 [2024-07-23 08:21:57.856333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:51510100 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.856364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.856448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.856464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.916241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:51510100 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.916270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.437 [2024-07-23 08:21:57.916355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:51515151 cdw11:51510002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.437 [2024-07-23 08:21:57.916370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.437 #36 NEW cov: 12181 ft: 14595 corp: 8/134b lim: 45 exec/s: 36 rss: 227Mb L: 20/29 MS: 3 PersAutoDict-InsertByte-InsertRepeatedBytes- DE: "\001\000\000\000"- 00:08:45.696 [2024-07-23 08:21:57.977777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:57.977806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:57.977884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:72315f64 cdw11:69760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:57.977902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:58.027655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.027684] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:58.027770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:72315f64 cdw11:69760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.027786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.696 #38 NEW cov: 12181 ft: 14639 corp: 9/152b lim: 45 exec/s: 38 rss: 229Mb L: 18/29 MS: 1 InsertByte- 00:08:45.696 [2024-07-23 08:21:58.086135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:db8a0a00 cdw11:2b6e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.086167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:58.086265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:155c00ff cdw11:1f330003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.086281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:58.156311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:db8a0a00 cdw11:2b6e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.156341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.696 [2024-07-23 08:21:58.156420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:155c00ff cdw11:1f330003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.696 [2024-07-23 08:21:58.156437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.696 #40 NEW cov: 12181 ft: 14670 corp: 10/170b lim: 45 exec/s: 40 rss: 230Mb L: 18/29 MS: 1 CMP- DE: "\333\212+n\037\\\026\000"- 00:08:45.956 [2024-07-23 08:21:58.227196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff150a00 cdw11:5c1f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.227227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.956 [2024-07-23 08:21:58.277269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff150a00 cdw11:5c1f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.277298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.956 #42 NEW cov: 12181 ft: 14779 corp: 11/180b lim: 45 exec/s: 42 rss: 231Mb L: 10/29 MS: 1 CopyPart- 00:08:45.956 [2024-07-23 08:21:58.348641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:db8a0a00 cdw11:2b6e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.348672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.956 [2024-07-23 08:21:58.348755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO 
SQ (01) qid:0 cid:5 nsid:0 cdw10:155c00ff cdw11:1f320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.348771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.956 [2024-07-23 08:21:58.419203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:db8a0a00 cdw11:2b6e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.419234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:45.956 [2024-07-23 08:21:58.419312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:155c00ff cdw11:1f320003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.956 [2024-07-23 08:21:58.419329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:45.956 #44 NEW cov: 12181 ft: 14851 corp: 12/198b lim: 45 exec/s: 44 rss: 232Mb L: 18/29 MS: 1 ChangeASCIIInt- 00:08:46.216 [2024-07-23 08:21:58.495872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.495903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.495987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.496004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.496086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2b6edb8a cdw11:1f5c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.496110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.555772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.555803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.555889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:8a8a8a8a cdw11:8a8a0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.555907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.555994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:2b6edb8a cdw11:1f5c0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.556010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:46.216 #46 NEW cov: 12181 ft: 14904 corp: 13/232b lim: 45 exec/s: 46 rss: 234Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:46.216 [2024-07-23 08:21:58.631174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.631209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.631307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:64726a5f cdw11:69760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.631326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.681515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7370 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.681545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.216 [2024-07-23 08:21:58.681641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:64726a5f cdw11:69760003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.216 [2024-07-23 08:21:58.681658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:46.216 #48 NEW cov: 12181 ft: 14911 corp: 14/250b lim: 45 exec/s: 48 rss: 235Mb L: 18/34 MS: 1 InsertByte- 00:08:46.475 [2024-07-23 08:21:58.755563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.475 [2024-07-23 08:21:58.755597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.476 [2024-07-23 08:21:58.825643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc1ff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.476 [2024-07-23 08:21:58.825674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.476 #50 NEW cov: 12181 ft: 14930 corp: 15/266b lim: 45 exec/s: 50 rss: 236Mb L: 16/34 MS: 1 ChangeByte- 00:08:46.476 [2024-07-23 08:21:58.897367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7372 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.476 [2024-07-23 08:21:58.897406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.476 [2024-07-23 08:21:58.947422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:646b7372 cdw11:5f6e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.476 [2024-07-23 08:21:58.947451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:46.476 #52 NEW cov: 12181 ft: 14949 corp: 16/283b lim: 45 exec/s: 26 rss: 237Mb L: 17/34 MS: 1 ChangeBinInt- 00:08:46.476 #52 DONE cov: 12181 ft: 14949 corp: 16/283b lim: 45 exec/s: 26 rss: 238Mb 00:08:46.476 ###### Recommended dictionary. 
###### 00:08:46.476 "spdk_nvme_driver" # Uses: 0 00:08:46.476 "\001\000\000\000" # Uses: 1 00:08:46.476 "\377\025\\\0373a\372\250" # Uses: 0 00:08:46.476 "\333\212+n\037\\\026\000" # Uses: 0 00:08:46.476 ###### End of recommended dictionary. ###### 00:08:46.476 Done 52 runs in 2 second(s) 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:47.044 08:21:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:08:47.044 [2024-07-23 08:21:59.445019] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:47.044 [2024-07-23 08:21:59.445106] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624975 ] 00:08:47.044 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.303 [2024-07-23 08:21:59.663515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.303 [2024-07-23 08:21:59.814406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.562 [2024-07-23 08:22:00.059948] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.562 [2024-07-23 08:22:00.077028] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:08:47.821 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.821 INFO: Seed: 1279059831 00:08:47.821 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:47.821 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:47.821 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:08:47.821 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.821 #2 INITED exec/s: 0 rss: 201Mb 00:08:47.821 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.821 This may also happen if the target rejected all inputs we tried so far 00:08:47.821 [2024-07-23 08:22:00.143291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:47.821 [2024-07-23 08:22:00.143330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:47.821 NEW_FUNC[1/697]: 0x557300 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:08:47.821 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:47.821 [2024-07-23 08:22:00.324550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:47.821 [2024-07-23 08:22:00.324597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.079 #4 NEW cov: 11959 ft: 11928 corp: 2/3b lim: 10 exec/s: 0 rss: 219Mb L: 2/2 MS: 1 InsertByte- 00:08:48.079 [2024-07-23 08:22:00.404234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:48.079 [2024-07-23 08:22:00.404272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.079 [2024-07-23 08:22:00.454074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:48.079 [2024-07-23 08:22:00.454112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.079 #6 NEW cov: 11983 ft: 12378 corp: 3/5b lim: 10 exec/s: 0 rss: 220Mb L: 2/2 MS: 1 InsertByte- 00:08:48.079 [2024-07-23 08:22:00.530734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:0000ff0a cdw11:00000000 00:08:48.079 [2024-07-23 08:22:00.530768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.337 [2024-07-23 08:22:00.600958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:48.337 [2024-07-23 08:22:00.600988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.337 #8 NEW cov: 11995 ft: 12741 corp: 4/8b lim: 10 exec/s: 0 rss: 221Mb L: 3/3 MS: 1 InsertByte- 00:08:48.337 [2024-07-23 08:22:00.666969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:48.337 [2024-07-23 08:22:00.667000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.337 [2024-07-23 08:22:00.737092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:48.337 [2024-07-23 08:22:00.737124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.337 #10 NEW cov: 12081 ft: 13207 corp: 5/11b lim: 10 exec/s: 0 rss: 223Mb L: 3/3 MS: 1 CrossOver- 00:08:48.337 [2024-07-23 08:22:00.803137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:48.337 [2024-07-23 08:22:00.803170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.337 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:48.595 [2024-07-23 08:22:00.873497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:48.595 [2024-07-23 08:22:00.873529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.595 #12 NEW cov: 12098 ft: 13279 corp: 6/13b lim: 10 exec/s: 0 rss: 225Mb L: 2/3 MS: 1 ShuffleBytes- 00:08:48.595 [2024-07-23 08:22:00.939750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:48.595 [2024-07-23 08:22:00.939784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.595 [2024-07-23 08:22:00.989535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:48.595 [2024-07-23 08:22:00.989565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.595 #14 NEW cov: 12098 ft: 13357 corp: 7/15b lim: 10 exec/s: 0 rss: 226Mb L: 2/3 MS: 1 CopyPart- 00:08:48.595 [2024-07-23 08:22:01.065496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:48.595 [2024-07-23 08:22:01.065528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.595 [2024-07-23 08:22:01.065628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 
cdw10:0000ce7f cdw11:00000000 00:08:48.595 [2024-07-23 08:22:01.065645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.853 [2024-07-23 08:22:01.135692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff0a cdw11:00000000 00:08:48.853 [2024-07-23 08:22:01.135723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.853 [2024-07-23 08:22:01.135823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ce7f cdw11:00000000 00:08:48.853 [2024-07-23 08:22:01.135840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:48.853 #16 NEW cov: 12098 ft: 13668 corp: 8/19b lim: 10 exec/s: 16 rss: 228Mb L: 4/4 MS: 1 InsertByte- 00:08:48.853 [2024-07-23 08:22:01.208935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:08:48.853 [2024-07-23 08:22:01.208967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.853 [2024-07-23 08:22:01.279047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:08:48.853 [2024-07-23 08:22:01.279078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:48.853 #18 NEW cov: 12098 ft: 13848 corp: 9/22b lim: 10 exec/s: 18 rss: 230Mb L: 3/4 MS: 1 InsertByte- 00:08:48.853 [2024-07-23 08:22:01.350839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:48.853 [2024-07-23 08:22:01.350873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.420925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007f0a cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.420958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.112 #20 NEW cov: 12098 ft: 13868 corp: 10/25b lim: 10 exec/s: 20 rss: 231Mb L: 3/4 MS: 1 CopyPart- 00:08:49.112 [2024-07-23 08:22:01.496040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.496072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.496169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.496189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.496287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f10a cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.496306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 
08:22:01.545822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.545853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.545952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.545968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.546064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f10a cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.546080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.112 #22 NEW cov: 12098 ft: 14057 corp: 11/32b lim: 10 exec/s: 22 rss: 232Mb L: 7/7 MS: 1 CMP- DE: "\377\377\377\361"- 00:08:49.112 [2024-07-23 08:22:01.621344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.621378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.621490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.621509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.621615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.621635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.112 [2024-07-23 08:22:01.621745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.112 [2024-07-23 08:22:01.621764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.671419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.671447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.671541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.671560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.671655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.671671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.671770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) 
qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.671787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:49.372 #25 NEW cov: 12098 ft: 14373 corp: 12/41b lim: 10 exec/s: 25 rss: 233Mb L: 9/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:08:49.372 [2024-07-23 08:22:01.733642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff30 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.733672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.733766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a7f cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.733783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.783656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff30 cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.783684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.783782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a7f cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.783798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.372 #27 NEW cov: 12098 ft: 14415 corp: 13/45b lim: 10 exec/s: 27 rss: 234Mb L: 4/9 MS: 1 InsertByte- 00:08:49.372 [2024-07-23 08:22:01.845059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003aff cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.845089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.372 [2024-07-23 08:22:01.845188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:49.372 [2024-07-23 08:22:01.845206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.631 [2024-07-23 08:22:01.895554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003aff cdw11:00000000 00:08:49.631 [2024-07-23 08:22:01.895583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.631 [2024-07-23 08:22:01.895678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a3a cdw11:00000000 00:08:49.631 [2024-07-23 08:22:01.895693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.631 #29 NEW cov: 12098 ft: 14431 corp: 14/49b lim: 10 exec/s: 29 rss: 235Mb L: 4/9 MS: 1 CopyPart- 00:08:49.631 [2024-07-23 08:22:01.967947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.631 [2024-07-23 08:22:01.967987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:49.631 [2024-07-23 08:22:02.038088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.632 [2024-07-23 08:22:02.038122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.632 #31 NEW cov: 12098 ft: 14481 corp: 15/52b lim: 10 exec/s: 31 rss: 237Mb L: 3/9 MS: 1 CMP- DE: "\377\377"- 00:08:49.632 [2024-07-23 08:22:02.109243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.632 [2024-07-23 08:22:02.109284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.632 [2024-07-23 08:22:02.109393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000007ff cdw11:00000000 00:08:49.632 [2024-07-23 08:22:02.109417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.632 [2024-07-23 08:22:02.109515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f10a cdw11:00000000 00:08:49.632 [2024-07-23 08:22:02.109538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.632 #32 pulse cov: 12098 ft: 14601 corp: 15/52b lim: 10 exec/s: 16 rss: 238Mb 00:08:49.891 [2024-07-23 08:22:02.179344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:49.891 [2024-07-23 08:22:02.179373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.891 [2024-07-23 08:22:02.179470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000007ff cdw11:00000000 00:08:49.891 [2024-07-23 08:22:02.179487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.891 [2024-07-23 08:22:02.179585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000f10a cdw11:00000000 00:08:49.891 [2024-07-23 08:22:02.179601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.891 #33 NEW cov: 12098 ft: 14601 corp: 16/59b lim: 10 exec/s: 16 rss: 239Mb L: 7/9 MS: 1 ChangeByte- 00:08:49.891 #33 DONE cov: 12098 ft: 14601 corp: 16/59b lim: 10 exec/s: 16 rss: 239Mb 00:08:49.891 ###### Recommended dictionary. ###### 00:08:49.891 "\377\377\377\361" # Uses: 0 00:08:49.891 "\377\377" # Uses: 0 00:08:49.891 ###### End of recommended dictionary. 
###### 00:08:49.891 Done 33 runs in 2 second(s) 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:50.150 08:22:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:08:50.409 [2024-07-23 08:22:02.680608] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:50.409 [2024-07-23 08:22:02.680729] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625529 ] 00:08:50.409 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.409 [2024-07-23 08:22:02.909375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.668 [2024-07-23 08:22:03.064133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.928 [2024-07-23 08:22:03.308465] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.928 [2024-07-23 08:22:03.324732] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:08:50.928 INFO: Running with entropic power schedule (0xFF, 100). 00:08:50.928 INFO: Seed: 233092003 00:08:50.928 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:50.928 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:50.928 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:08:50.928 INFO: A corpus is not provided, starting from an empty corpus 00:08:50.928 #2 INITED exec/s: 0 rss: 202Mb 00:08:50.928 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:50.928 This may also happen if the target rejected all inputs we tried so far 00:08:50.928 [2024-07-23 08:22:03.380562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:50.928 [2024-07-23 08:22:03.380598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.187 NEW_FUNC[1/697]: 0x557df0 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:08:51.187 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:51.187 [2024-07-23 08:22:03.551114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:51.187 [2024-07-23 08:22:03.551185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.187 #4 NEW cov: 11945 ft: 11960 corp: 2/3b lim: 10 exec/s: 0 rss: 218Mb L: 2/2 MS: 1 CopyPart- 00:08:51.187 [2024-07-23 08:22:03.624723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:51.187 [2024-07-23 08:22:03.624766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.187 [2024-07-23 08:22:03.664347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:51.187 [2024-07-23 08:22:03.664376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.187 #9 NEW cov: 11983 ft: 12391 corp: 3/5b lim: 10 exec/s: 0 rss: 220Mb L: 2/2 MS: 4 ChangeByte-CrossOver-ShuffleBytes-CopyPart- 00:08:51.446 [2024-07-23 08:22:03.722377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.722410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.782117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.782145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.446 #11 NEW cov: 11995 ft: 12778 corp: 4/7b lim: 10 exec/s: 0 rss: 221Mb L: 2/2 MS: 1 ShuffleBytes- 00:08:51.446 [2024-07-23 08:22:03.842478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa1 cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.842510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.842580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a1a1 cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.842596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.842654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a10a cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.842669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.902283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa1 cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.902312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.902362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000a1a1 cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.902376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.446 [2024-07-23 08:22:03.902427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000a10a cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.902440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.446 #13 NEW cov: 12081 ft: 13594 corp: 5/13b lim: 10 exec/s: 0 rss: 224Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:51.446 [2024-07-23 08:22:03.963699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 00:08:51.446 [2024-07-23 08:22:03.963733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.705 [2024-07-23 08:22:04.003362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.003391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.705 #17 NEW cov: 12081 ft: 13799 corp: 6/16b lim: 10 exec/s: 0 rss: 225Mb L: 3/6 MS: 3 
EraseBytes-ChangeByte-CrossOver- 00:08:51.705 [2024-07-23 08:22:04.064601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.064636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.705 [2024-07-23 08:22:04.064696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.064712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.705 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:51.705 [2024-07-23 08:22:04.104181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.104211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.705 [2024-07-23 08:22:04.104281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.104296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.705 #20 NEW cov: 12098 ft: 14050 corp: 7/21b lim: 10 exec/s: 0 rss: 226Mb L: 5/6 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:51.705 [2024-07-23 08:22:04.174572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000083e3 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.174602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.705 [2024-07-23 08:22:04.174655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.174669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.705 [2024-07-23 08:22:04.174721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:51.705 [2024-07-23 08:22:04.174735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.234741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000083e3 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.234772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.234829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.234844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.234894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.234908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.965 #22 NEW cov: 12098 ft: 14087 corp: 8/27b lim: 10 exec/s: 0 rss: 228Mb L: 6/6 MS: 1 InsertByte- 00:08:51.965 [2024-07-23 08:22:04.307219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.307251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.307305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.307319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.307371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.307384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.367058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.367089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.367159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.367174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.367224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.367237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:51.965 #24 NEW cov: 12098 ft: 14185 corp: 9/33b lim: 10 exec/s: 24 rss: 229Mb L: 6/6 MS: 1 CopyPart- 00:08:51.965 [2024-07-23 08:22:04.434316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a51 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.434352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:51.965 [2024-07-23 08:22:04.473950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a51 cdw11:00000000 00:08:51.965 [2024-07-23 08:22:04.473979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.223 #26 NEW cov: 12098 ft: 14213 corp: 10/36b lim: 10 exec/s: 26 rss: 230Mb L: 3/6 MS: 1 InsertByte- 00:08:52.223 [2024-07-23 08:22:04.539501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b0a cdw11:00000000 00:08:52.223 [2024-07-23 08:22:04.539531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.223 [2024-07-23 08:22:04.589611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000b0a 
cdw11:00000000 00:08:52.223 [2024-07-23 08:22:04.589639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.223 #28 NEW cov: 12098 ft: 14268 corp: 11/38b lim: 10 exec/s: 28 rss: 232Mb L: 2/6 MS: 1 ChangeBit- 00:08:52.223 [2024-07-23 08:22:04.654313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa9 cdw11:00000000 00:08:52.223 [2024-07-23 08:22:04.654347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.223 [2024-07-23 08:22:04.694198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aa9 cdw11:00000000 00:08:52.223 [2024-07-23 08:22:04.694226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.223 #30 NEW cov: 12098 ft: 14304 corp: 12/41b lim: 10 exec/s: 30 rss: 234Mb L: 3/6 MS: 1 InsertByte- 00:08:52.482 [2024-07-23 08:22:04.754055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008883 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.754091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.754158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.754176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.754239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.754255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.813496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008883 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.813523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.813579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.813593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.813641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008324 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.813655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.482 #32 NEW cov: 12098 ft: 14402 corp: 13/47b lim: 10 exec/s: 32 rss: 235Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:52.482 [2024-07-23 08:22:04.873901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.873933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.913607] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.913635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.482 #34 NEW cov: 12098 ft: 14433 corp: 14/49b lim: 10 exec/s: 34 rss: 236Mb L: 2/6 MS: 1 ShuffleBytes- 00:08:52.482 [2024-07-23 08:22:04.964064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008883 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.964101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.964159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.964175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.482 [2024-07-23 08:22:04.964229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008824 cdw11:00000000 00:08:52.482 [2024-07-23 08:22:04.964244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.024047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008883 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.024075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.024149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.024165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.024215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008824 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.024229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.741 #36 NEW cov: 12098 ft: 14471 corp: 15/55b lim: 10 exec/s: 36 rss: 237Mb L: 6/6 MS: 1 CopyPart- 00:08:52.741 [2024-07-23 08:22:05.085499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.085530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.085587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.085603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.125162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.741 [2024-07-23 08:22:05.125189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.741 [2024-07-23 08:22:05.125244] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.742 [2024-07-23 08:22:05.125258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.742 #38 NEW cov: 12098 ft: 14576 corp: 16/60b lim: 10 exec/s: 38 rss: 239Mb L: 5/6 MS: 1 ShuffleBytes- 00:08:52.742 [2024-07-23 08:22:05.175732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008324 cdw11:00000000 00:08:52.742 [2024-07-23 08:22:05.175767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.742 [2024-07-23 08:22:05.175819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.742 [2024-07-23 08:22:05.175835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.742 [2024-07-23 08:22:05.235854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00008324 cdw11:00000000 00:08:52.742 [2024-07-23 08:22:05.235884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.742 [2024-07-23 08:22:05.235935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:08:52.742 [2024-07-23 08:22:05.235950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.001 #40 NEW cov: 12098 ft: 14599 corp: 17/65b lim: 10 exec/s: 40 rss: 240Mb L: 5/6 MS: 1 ShuffleBytes- 00:08:53.001 [2024-07-23 08:22:05.301955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002df6 cdw11:00000000 00:08:53.001 [2024-07-23 08:22:05.301994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.001 [2024-07-23 08:22:05.361842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002df6 cdw11:00000000 00:08:53.001 [2024-07-23 08:22:05.361872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.001 #42 NEW cov: 12098 ft: 14616 corp: 18/68b lim: 10 exec/s: 21 rss: 241Mb L: 3/6 MS: 1 ChangeBinInt- 00:08:53.001 #42 DONE cov: 12098 ft: 14616 corp: 18/68b lim: 10 exec/s: 21 rss: 241Mb 00:08:53.001 Done 42 runs in 2 second(s) 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:53.568 08:22:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:53.568 [2024-07-23 08:22:05.861650] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:53.568 [2024-07-23 08:22:05.861755] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626084 ] 00:08:53.568 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.568 [2024-07-23 08:22:06.078415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.826 [2024-07-23 08:22:06.226429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.085 [2024-07-23 08:22:06.466423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.085 [2024-07-23 08:22:06.482693] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:54.085 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:54.085 INFO: Seed: 3391116298 00:08:54.085 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:54.085 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:54.085 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:54.085 INFO: A corpus is not provided, starting from an empty corpus 00:08:54.085 [2024-07-23 08:22:06.538526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.085 [2024-07-23 08:22:06.538560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.085 #2 INITED cov: 11987 ft: 11959 corp: 1/1b exec/s: 0 rss: 212Mb 00:08:54.085 [2024-07-23 08:22:06.578548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.085 [2024-07-23 08:22:06.578578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.345 [2024-07-23 08:22:06.638739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.345 [2024-07-23 08:22:06.638771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.345 #4 NEW cov: 12116 ft: 12656 corp: 2/2b lim: 5 exec/s: 0 rss: 219Mb L: 1/1 MS: 1 ShuffleBytes- 00:08:54.345 [2024-07-23 08:22:06.714042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.345 [2024-07-23 08:22:06.714081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.345 [2024-07-23 08:22:06.714161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.345 [2024-07-23 08:22:06.714181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.345 [2024-07-23 08:22:06.763848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.345 [2024-07-23 08:22:06.763878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.345 [2024-07-23 08:22:06.763950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.345 [2024-07-23 08:22:06.763966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.345 #6 NEW cov: 12201 ft: 13725 corp: 3/4b lim: 5 exec/s: 0 rss: 221Mb L: 2/2 MS: 1 InsertByte- 00:08:54.345 [2024-07-23 08:22:06.823242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:54.345 [2024-07-23 08:22:06.823276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.346 [2024-07-23 08:22:06.823342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.346 [2024-07-23 08:22:06.823360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.346 [2024-07-23 08:22:06.823420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.346 [2024-07-23 08:22:06.823434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.346 [2024-07-23 08:22:06.823491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.346 [2024-07-23 08:22:06.823505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:06.882963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.882993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:06.883054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.883069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:06.883125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.883139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:06.883193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.883207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:54.605 #8 NEW cov: 12201 ft: 14381 corp: 4/8b lim: 5 exec/s: 0 rss: 222Mb L: 4/4 MS: 1 CopyPart- 00:08:54.605 [2024-07-23 08:22:06.943243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.943281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:06.982949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:06.982978] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.605 #10 NEW cov: 12201 ft: 14468 corp: 5/9b lim: 5 exec/s: 0 rss: 224Mb L: 1/4 MS: 1 CrossOver- 00:08:54.605 [2024-07-23 08:22:07.043993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:07.044032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:07.044115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:07.044136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:07.083692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:07.083720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.605 [2024-07-23 08:22:07.083777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.605 [2024-07-23 08:22:07.083791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.605 #12 NEW cov: 12201 ft: 14501 corp: 6/11b lim: 5 exec/s: 0 rss: 225Mb L: 2/4 MS: 1 CrossOver- 00:08:54.864 [2024-07-23 08:22:07.145120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.864 [2024-07-23 08:22:07.145152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.864 [2024-07-23 08:22:07.204875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.864 [2024-07-23 08:22:07.204902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.864 #14 NEW cov: 12201 ft: 14541 corp: 7/12b lim: 5 exec/s: 0 rss: 226Mb L: 1/4 MS: 1 EraseBytes- 00:08:54.864 [2024-07-23 08:22:07.265559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.864 [2024-07-23 08:22:07.265590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.864 [2024-07-23 08:22:07.265652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:54.864 [2024-07-23 08:22:07.265668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.123 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:55.123 [2024-07-23 08:22:07.426194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.426265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.123 [2024-07-23 08:22:07.426349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.426378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.123 #16 NEW cov: 12218 ft: 14659 corp: 8/14b lim: 5 exec/s: 0 rss: 228Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:55.123 [2024-07-23 08:22:07.499629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.499674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.123 [2024-07-23 08:22:07.559564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.559599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.123 #18 NEW cov: 12218 ft: 14753 corp: 9/15b lim: 5 exec/s: 18 rss: 229Mb L: 1/4 MS: 1 EraseBytes- 00:08:55.123 [2024-07-23 08:22:07.620714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.620746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.123 [2024-07-23 08:22:07.620810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.123 [2024-07-23 08:22:07.620826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.680317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.680346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.680405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.680420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.382 #20 NEW cov: 12218 ft: 14808 corp: 10/17b lim: 5 exec/s: 20 rss: 231Mb L: 2/4 MS: 1 InsertByte- 00:08:55.382 [2024-07-23 08:22:07.731086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.731125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.731189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.731205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.791201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.791230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.791293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.791307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.382 #22 NEW cov: 12218 ft: 14847 corp: 11/19b lim: 5 exec/s: 22 rss: 233Mb L: 2/4 MS: 1 InsertByte- 00:08:55.382 [2024-07-23 08:22:07.852341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.852374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.852434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.852450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.892151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.892182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.382 [2024-07-23 08:22:07.892258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.382 [2024-07-23 08:22:07.892273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.641 #24 NEW cov: 12218 ft: 14930 corp: 12/21b lim: 5 exec/s: 24 rss: 233Mb L: 2/4 MS: 1 ShuffleBytes- 00:08:55.641 [2024-07-23 08:22:07.954253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.641 [2024-07-23 08:22:07.954288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.641 [2024-07-23 08:22:08.013881] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.641 [2024-07-23 08:22:08.013910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.641 #26 NEW cov: 12218 ft: 14944 corp: 13/22b lim: 5 exec/s: 26 rss: 235Mb L: 1/4 MS: 1 EraseBytes- 00:08:55.641 [2024-07-23 08:22:08.075061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.641 [2024-07-23 08:22:08.075106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.641 [2024-07-23 08:22:08.114725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.641 [2024-07-23 08:22:08.114754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.641 #28 NEW cov: 12218 ft: 14969 corp: 14/23b lim: 5 exec/s: 28 rss: 236Mb L: 1/4 MS: 1 ChangeBit- 00:08:55.900 [2024-07-23 08:22:08.176193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.900 [2024-07-23 08:22:08.176227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.900 [2024-07-23 08:22:08.235870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.900 [2024-07-23 08:22:08.235899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.900 #30 NEW cov: 12218 ft: 15014 corp: 15/24b lim: 5 exec/s: 30 rss: 237Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:55.900 [2024-07-23 08:22:08.307106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.900 [2024-07-23 08:22:08.307136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.900 [2024-07-23 08:22:08.367253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:55.900 [2024-07-23 08:22:08.367283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.900 #32 NEW cov: 12218 ft: 15123 corp: 16/25b lim: 5 exec/s: 32 rss: 239Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:56.159 [2024-07-23 08:22:08.438932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.438964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.478911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.478940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.159 #34 NEW cov: 12218 ft: 15126 corp: 17/26b lim: 5 exec/s: 34 rss: 240Mb L: 1/4 MS: 1 ChangeBit- 00:08:56.159 [2024-07-23 08:22:08.540638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.540677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.540751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.540772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.540847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.540868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.540941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.540959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.580200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.580228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.580295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.580310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.580363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.580376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.159 [2024-07-23 08:22:08.580430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:56.159 [2024-07-23 08:22:08.580443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.160 #36 NEW cov: 12218 ft: 15141 corp: 18/30b lim: 5 exec/s: 18 rss: 242Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:56.160 #36 DONE cov: 12218 ft: 15141 corp: 18/30b lim: 5 exec/s: 18 
rss: 242Mb 00:08:56.160 Done 36 runs in 2 second(s) 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:56.727 08:22:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:08:56.727 [2024-07-23 08:22:09.075779] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
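Note: the nvmf/run.sh trace above shows how each fuzzer instance is isolated from the default NVMe/TCP listener. The listen port is derived from the fuzzer index (printf %02d 9 yields 09, so port=4409, apparently 44 plus the zero-padded index; the next instance further down gets 4410), sed retargets the stock fuzz_json.conf from trsvcid 4420 to that port, and two LSAN leak suppressions (spdk_nvmf_qpair_disconnect, nvmf_ctrlr_create) are registered before llvm_nvme_fuzz is launched against the resulting TRID. A minimal shell sketch of that per-instance setup, reconstructed from the trace; the redirections into /tmp/fuzz_json_9.conf and /var/tmp/suppress_nvmf_fuzz are implied by the -c argument and LSAN_OPTIONS rather than shown, so treat this as illustrative, not the verbatim run.sh:

  fuzzer_type=9                                   # the index passed as -Z in the trace
  port="44$(printf %02d "$fuzzer_type")"          # 4409, matching 'port=4409' above
  nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  # retarget the template config at this instance's port
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf \
    > "$nvmf_cfg"
  # suppress known, intentional leaks so LeakSanitizer does not fail the run
  # (run.sh declares LSAN_OPTIONS 'local' inside start_llvm_fuzz; export is used
  # here only to keep the sketch self-contained)
  echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
  echo leak:nvmf_ctrlr_create >> "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

The per-port isolation explains why the rm -rf at run.sh@54 only has to clean the previous instance's /tmp/fuzz_json_8.conf and suppress file: each fuzzer gets a fresh config, corpus directory, and listener without tearing down the whole target.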
00:08:56.727 [2024-07-23 08:22:09.075872] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626541 ] 00:08:56.727 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.986 [2024-07-23 08:22:09.298288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.986 [2024-07-23 08:22:09.450222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.244 [2024-07-23 08:22:09.696948] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.244 [2024-07-23 08:22:09.713292] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:57.244 INFO: Running with entropic power schedule (0xFF, 100). 00:08:57.244 INFO: Seed: 2328101501 00:08:57.244 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:08:57.244 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:08:57.244 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:57.244 INFO: A corpus is not provided, starting from an empty corpus 00:08:57.502 [2024-07-23 08:22:09.769011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.502 [2024-07-23 08:22:09.769044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.502 #2 INITED cov: 11984 ft: 11941 corp: 1/1b exec/s: 0 rss: 212Mb 00:08:57.502 [2024-07-23 08:22:09.808993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.502 [2024-07-23 08:22:09.809025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.502 NEW_FUNC[1/1]: 0x11ed510 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:31 00:08:57.502 [2024-07-23 08:22:09.979689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.502 [2024-07-23 08:22:09.979734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.761 #4 NEW cov: 12116 ft: 12651 corp: 2/2b lim: 5 exec/s: 0 rss: 219Mb L: 1/1 MS: 1 CopyPart- 00:08:57.761 [2024-07-23 08:22:10.059070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.059118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.059217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.059233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.059320] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.059335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.059426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.059442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.128867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.128900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.128985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.129001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.129086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.129104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.129138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.129148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:57.761 #6 NEW cov: 12201 ft: 13912 corp: 3/6b lim: 5 exec/s: 0 rss: 221Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:57.761 [2024-07-23 08:22:10.203489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.203523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.761 [2024-07-23 08:22:10.253324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:57.761 [2024-07-23 08:22:10.253352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:57.761 #8 NEW cov: 12201 ft: 14111 corp: 4/7b lim: 5 exec/s: 0 rss: 222Mb L: 1/4 MS: 1 ChangeByte- 00:08:58.020 [2024-07-23 08:22:10.314396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.314426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.314513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.314527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.314614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.314629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.314715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.314730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.384458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.384487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.384579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.384594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.384687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.384703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.020 [2024-07-23 08:22:10.384787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.384802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.020 #10 NEW cov: 12201 ft: 14258 corp: 5/11b lim: 5 exec/s: 0 rss: 224Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:58.020 [2024-07-23 08:22:10.451109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.451148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.020 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:08:58.020 [2024-07-23 08:22:10.521227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.020 [2024-07-23 08:22:10.521257] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.279 #12 NEW cov: 12218 ft: 14394 corp: 6/12b lim: 5 exec/s: 0 rss: 226Mb L: 1/4 MS: 1 ShuffleBytes- 00:08:58.279 [2024-07-23 08:22:10.583390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.279 [2024-07-23 08:22:10.583431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.279 [2024-07-23 08:22:10.583540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.279 [2024-07-23 08:22:10.583563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.279 [2024-07-23 08:22:10.583656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.279 [2024-07-23 08:22:10.583676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.279 [2024-07-23 08:22:10.583772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.583793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.583887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.583907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.653674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.653705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.653797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.653814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.653891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.653905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.653993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.654009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.654090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.654113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.280 #14 NEW cov: 12218 ft: 14495 corp: 7/17b lim: 5 exec/s: 0 rss: 227Mb L: 5/5 MS: 1 CrossOver- 00:08:58.280 [2024-07-23 08:22:10.728405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.728438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.728527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.728543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.728632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.728648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.728741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.728760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.728845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.728861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.788084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.788135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.788220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.788237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.788329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.788346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.788435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.788452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:58.280 [2024-07-23 08:22:10.788536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.280 [2024-07-23 08:22:10.788553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:58.539 #16 NEW cov: 12218 ft: 14532 corp: 8/22b lim: 5 exec/s: 16 rss: 228Mb L: 5/5 MS: 1 CopyPart- 00:08:58.539 [2024-07-23 08:22:10.864239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.539 [2024-07-23 08:22:10.864273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.539 [2024-07-23 08:22:10.924202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.539 [2024-07-23 08:22:10.924235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.539 #18 NEW cov: 12218 ft: 14599 corp: 9/23b lim: 5 exec/s: 18 rss: 230Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:58.539 [2024-07-23 08:22:10.994736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.539 [2024-07-23 08:22:10.994765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.539 [2024-07-23 08:22:11.044954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.539 [2024-07-23 08:22:11.044984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.798 #20 NEW cov: 12218 ft: 14618 corp: 10/24b lim: 5 exec/s: 20 rss: 231Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:58.798 [2024-07-23 08:22:11.116276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.798 [2024-07-23 08:22:11.116306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.798 [2024-07-23 08:22:11.166605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.798 [2024-07-23 08:22:11.166633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.798 #22 NEW cov: 12218 ft: 14702 corp: 11/25b lim: 5 exec/s: 22 rss: 232Mb L: 1/5 MS: 1 ChangeBinInt- 00:08:58.798 [2024-07-23 08:22:11.232064] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.798 [2024-07-23 08:22:11.232102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.798 [2024-07-23 08:22:11.282080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:58.798 [2024-07-23 08:22:11.282114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:58.798 #24 NEW cov: 12218 ft: 14763 corp: 12/26b lim: 5 exec/s: 24 rss: 234Mb L: 1/5 MS: 1 CopyPart- 00:08:59.056 [2024-07-23 08:22:11.340797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.056 [2024-07-23 08:22:11.340827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.056 [2024-07-23 08:22:11.410998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.056 [2024-07-23 08:22:11.411029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.056 #26 NEW cov: 12218 ft: 14872 corp: 13/27b lim: 5 exec/s: 26 rss: 235Mb L: 1/5 MS: 1 CrossOver- 00:08:59.056 [2024-07-23 08:22:11.471114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.056 [2024-07-23 08:22:11.471152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.056 [2024-07-23 08:22:11.541203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.056 [2024-07-23 08:22:11.541234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.056 #28 NEW cov: 12218 ft: 14946 corp: 14/28b lim: 5 exec/s: 28 rss: 237Mb L: 1/5 MS: 1 CopyPart- 00:08:59.315 [2024-07-23 08:22:11.602744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.602781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.652898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.652926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.315 #30 NEW cov: 12218 ft: 15013 corp: 15/29b lim: 5 exec/s: 30 rss: 238Mb L: 1/5 MS: 1 ChangeBinInt- 00:08:59.315 [2024-07-23 08:22:11.720946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.720975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.721070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.721088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.721176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.721193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.721275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.721289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.721373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.721389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.771154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.771183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.771278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.771294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.771384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.315 [2024-07-23 08:22:11.771398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.315 [2024-07-23 08:22:11.771490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.316 [2024-07-23 08:22:11.771506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:59.316 [2024-07-23 08:22:11.771593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:59.316 [2024-07-23 08:22:11.771609] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:59.316 #32 NEW cov: 12218 ft: 15036 corp: 16/34b lim: 5 exec/s: 16 rss: 239Mb L: 5/5 MS: 1 InsertByte- 00:08:59.316 #32 DONE cov: 12218 ft: 15036 corp: 16/34b lim: 5 exec/s: 16 rss: 239Mb 00:08:59.316 Done 32 runs in 2 second(s) 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:59.883 08:22:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:59.883 [2024-07-23 08:22:12.275661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
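Note: the '#N NEW' and '#N DONE' lines in the run that just completed above are libFuzzer's standard status output, and the summary can be cross-checked directly against the wrapper's own message. cov: counts covered code points and ft: coverage features; corp: 16/34b means 16 corpus units totalling 34 bytes; lim: 5 is the current cap on generated input length; rss: is resident memory; L: x/y appears to report the new unit's length against the largest in the corpus; and MS: names the mutation(s) that produced the unit (CopyPart, InsertByte, ShuffleBytes, and so on). exec/s: is simply total executions over elapsed seconds, so '#32 DONE ... exec/s: 16' is consistent with 'Done 32 runs in 2 second(s)' (32 / 2 = 16), just as the previous instance's '#36 DONE ... exec/s: 18' matches its 36 runs in 2 seconds. The interleaved SPDK *NOTICE* lines are the target itself echoing each fuzzed admin command, NAMESPACE MANAGEMENT (opcode 0dh) in that run, together with the INVALID OPCODE (00/01) completion it returned; since the target rejects opcodes it does not implement, those notices are the expected outcome of a clean run, not errors.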
00:08:59.883 [2024-07-23 08:22:12.275750] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627158 ] 00:08:59.883 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.142 [2024-07-23 08:22:12.497743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.142 [2024-07-23 08:22:12.646338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.400 [2024-07-23 08:22:12.885610] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.400 [2024-07-23 08:22:12.901876] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:09:00.400 INFO: Running with entropic power schedule (0xFF, 100). 00:09:00.400 INFO: Seed: 1219164444 00:09:00.659 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:00.659 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:00.659 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:00.659 INFO: A corpus is not provided, starting from an empty corpus 00:09:00.659 #2 INITED exec/s: 0 rss: 201Mb 00:09:00.659 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:00.659 This may also happen if the target rejected all inputs we tried so far 00:09:00.659 [2024-07-23 08:22:12.961454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:12.961488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:12.961555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:12.961569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:12.961629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:12.961643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:12.961703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:12.961717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.659 NEW_FUNC[1/698]: 0x5599c0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:09:00.659 NEW_FUNC[2/698]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:00.659 [2024-07-23 08:22:13.132258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:13.132315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:13.132401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:13.132423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:13.132498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:13.132517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.659 [2024-07-23 08:22:13.132588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.659 [2024-07-23 08:22:13.132608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.659 #10 NEW cov: 11999 ft: 12003 corp: 2/34b lim: 40 exec/s: 0 rss: 218Mb L: 33/33 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:00.930 [2024-07-23 08:22:13.209044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.209090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.209171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.209191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.209262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.209282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.209357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.209379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.268707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.268737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.268798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0affff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 
[2024-07-23 08:22:13.268813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.268869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.268882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.268939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.268952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.930 #12 NEW cov: 12034 ft: 12350 corp: 3/67b lim: 40 exec/s: 0 rss: 220Mb L: 33/33 MS: 1 CrossOver- 00:09:00.930 [2024-07-23 08:22:13.337375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affff5f cdw11:40000c29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.337412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.377301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affff5f cdw11:40000c29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.377332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.930 #15 NEW cov: 12046 ft: 13254 corp: 4/76b lim: 40 exec/s: 0 rss: 221Mb L: 9/33 MS: 2 CopyPart-CMP- DE: "\377\377_@\000\014)\353"- 00:09:00.930 [2024-07-23 08:22:13.437344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffff6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.437377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.437439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:766d66ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.437453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.437517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.437531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.930 [2024-07-23 08:22:13.437591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.930 [2024-07-23 08:22:13.437607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.477182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 
cdw10:2affffff cdw11:ffffff6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.477214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.477291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:766d66ff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.477306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.477368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.477384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.477442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.477456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.188 #17 NEW cov: 12132 ft: 13732 corp: 5/113b lim: 40 exec/s: 0 rss: 223Mb L: 37/37 MS: 1 CMP- DE: "nvmf"- 00:09:01.188 [2024-07-23 08:22:13.537821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:5f40000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.537853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.577553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0affff cdw11:5f40000c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.577582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.188 #19 NEW cov: 12132 ft: 14033 corp: 6/121b lim: 40 exec/s: 0 rss: 224Mb L: 8/37 MS: 1 CrossOver- 00:09:01.188 [2024-07-23 08:22:13.637840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.637881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.637951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6effffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.637969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.638035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.638052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.188 [2024-07-23 08:22:13.638126] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.638144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.188 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:01.188 [2024-07-23 08:22:13.697616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.188 [2024-07-23 08:22:13.697647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.189 [2024-07-23 08:22:13.697727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6effffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.189 [2024-07-23 08:22:13.697742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.189 [2024-07-23 08:22:13.697802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.189 [2024-07-23 08:22:13.697816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.189 [2024-07-23 08:22:13.697876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.189 [2024-07-23 08:22:13.697889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.447 #21 NEW cov: 12149 ft: 14129 corp: 7/155b lim: 40 exec/s: 0 rss: 226Mb L: 34/37 MS: 1 InsertByte- 00:09:01.447 [2024-07-23 08:22:13.766629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.766670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.766746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ecffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.766766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.766835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.766854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.766925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.766942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.806203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.806234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.806296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ecffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.806311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.806369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.806383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.806440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.806454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.447 #23 NEW cov: 12149 ft: 14174 corp: 8/189b lim: 40 exec/s: 0 rss: 228Mb L: 34/37 MS: 1 InsertByte- 00:09:01.447 [2024-07-23 08:22:13.858093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.858135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.858198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ecffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.858214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.858271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.858286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.858346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.858360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.917712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.917741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.917816] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ecffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.917831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.917889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.917903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.447 [2024-07-23 08:22:13.917962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.447 [2024-07-23 08:22:13.917976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.447 #25 NEW cov: 12149 ft: 14220 corp: 9/228b lim: 40 exec/s: 25 rss: 229Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:09:01.447 [2024-07-23 08:22:13.964927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.448 [2024-07-23 08:22:13.964965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.448 [2024-07-23 08:22:13.965034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.448 [2024-07-23 08:22:13.965053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.448 [2024-07-23 08:22:13.965124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffecff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.448 [2024-07-23 08:22:13.965143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.448 [2024-07-23 08:22:13.965210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.448 [2024-07-23 08:22:13.965231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.448 [2024-07-23 08:22:13.965297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.448 [2024-07-23 08:22:13.965315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.004593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.004622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.004684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.004698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.004755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffecff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.004768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.004826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.004838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.004902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.004916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.707 #27 NEW cov: 12149 ft: 14339 corp: 10/268b lim: 40 exec/s: 27 rss: 230Mb L: 40/40 MS: 1 CrossOver- 00:09:01.707 [2024-07-23 08:22:14.058800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.058842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.058924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff0aff05 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.058945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.059019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0505ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.059039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.059129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.059150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.098282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.098311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.098391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:ffffffff cdw11:ff0aff05 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.098405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.098465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:0505ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.098479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.098537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.098551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.707 #29 NEW cov: 12149 ft: 14426 corp: 11/304b lim: 40 exec/s: 29 rss: 232Mb L: 36/40 MS: 1 InsertRepeatedBytes- 00:09:01.707 [2024-07-23 08:22:14.148213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.148255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.148335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6effffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.148356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.148430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffffeff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.148450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.148523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.148543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.707 [2024-07-23 08:22:14.207801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.707 [2024-07-23 08:22:14.207830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.708 [2024-07-23 08:22:14.207889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:6effffff cdw11:ffff0aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.708 [2024-07-23 08:22:14.207903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.708 [2024-07-23 08:22:14.207962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:fffffeff cdw11:ffffffff SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:09:01.708 [2024-07-23 08:22:14.207975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.708 [2024-07-23 08:22:14.208032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.708 [2024-07-23 08:22:14.208045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.967 #31 NEW cov: 12149 ft: 14466 corp: 12/338b lim: 40 exec/s: 31 rss: 233Mb L: 34/40 MS: 1 ChangeBit- 00:09:01.967 [2024-07-23 08:22:14.266527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affff5f cdw11:40000c29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.266560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.266624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ebffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.266640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.266701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffecff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.266716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.266776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.266791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.266853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.266867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.326329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0affff5f cdw11:40000c29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.326358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.326433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ebffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.326448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.326505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffecff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 
08:22:14.326519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.326577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.326591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.326650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.326664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.967 #33 NEW cov: 12149 ft: 14550 corp: 13/378b lim: 40 exec/s: 33 rss: 234Mb L: 40/40 MS: 1 CrossOver- 00:09:01.967 [2024-07-23 08:22:14.397721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.397752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.397813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.397831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.397888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.397902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.397960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.397974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.437790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.437819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.437881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.437895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.437952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.437965] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.967 [2024-07-23 08:22:14.438024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.967 [2024-07-23 08:22:14.438037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.967 #35 NEW cov: 12149 ft: 14573 corp: 14/411b lim: 40 exec/s: 35 rss: 236Mb L: 33/40 MS: 1 CrossOver- 00:09:02.226 [2024-07-23 08:22:14.488429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.488462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.488524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.488539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.488598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.488613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.488672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.488686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.488753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.488767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.548540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.548569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.226 [2024-07-23 08:22:14.548629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.226 [2024-07-23 08:22:14.548643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.548697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.548711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.548775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.548789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.548846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.548859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:02.227 #37 NEW cov: 12149 ft: 14627 corp: 15/451b lim: 40 exec/s: 37 rss: 237Mb L: 40/40 MS: 1 CopyPart- 00:09:02.227 [2024-07-23 08:22:14.599651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.599685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.599747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.599763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.599823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.599838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.599904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.599919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.659728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.659758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.659819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.659833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.659895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:050505ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.659911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:09:02.227 [2024-07-23 08:22:14.659971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.659985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.227 #39 NEW cov: 12149 ft: 14694 corp: 16/487b lim: 40 exec/s: 39 rss: 239Mb L: 36/40 MS: 1 CrossOver- 00:09:02.227 [2024-07-23 08:22:14.720809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffff6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.720845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.720935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:766dffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.720952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.721022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.721037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.227 [2024-07-23 08:22:14.721105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff0a cdw11:ff05ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.227 [2024-07-23 08:22:14.721121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.780524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2affffff cdw11:ffffff6e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.780555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.780631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:766dffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.780647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.780713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.780728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.780788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffff0a cdw11:ff05ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.780803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.486 #41 NEW cov: 12149 ft: 14704 corp: 17/524b 
lim: 40 exec/s: 41 rss: 240Mb L: 37/40 MS: 1 CrossOver- 00:09:02.486 [2024-07-23 08:22:14.842596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffecffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.842633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.842703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.842724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.902233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffecffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.902280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.486 [2024-07-23 08:22:14.902351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:02.486 [2024-07-23 08:22:14.902366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:02.486 #43 NEW cov: 12149 ft: 14932 corp: 18/547b lim: 40 exec/s: 21 rss: 241Mb L: 23/40 MS: 1 EraseBytes- 00:09:02.486 #43 DONE cov: 12149 ft: 14932 corp: 18/547b lim: 40 exec/s: 21 rss: 241Mb 00:09:02.486 ###### Recommended dictionary. ###### 00:09:02.486 "\377\377_@\000\014)\353" # Uses: 0 00:09:02.486 "nvmf" # Uses: 0 00:09:02.486 ###### End of recommended dictionary. 
###### 00:09:02.486 Done 43 runs in 2 second(s) 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:03.055 08:22:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:09:03.055 [2024-07-23 08:22:15.404651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:03.055 [2024-07-23 08:22:15.404741] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627650 ] 00:09:03.055 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.314 [2024-07-23 08:22:15.623852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.314 [2024-07-23 08:22:15.773328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.573 [2024-07-23 08:22:16.010596] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.573 [2024-07-23 08:22:16.026859] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:03.573 INFO: Running with entropic power schedule (0xFF, 100). 00:09:03.573 INFO: Seed: 51189068 00:09:03.573 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:03.573 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:03.573 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:03.573 INFO: A corpus is not provided, starting from an empty corpus 00:09:03.573 #2 INITED exec/s: 0 rss: 202Mb 00:09:03.573 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:03.573 This may also happen if the target rejected all inputs we tried so far 00:09:03.832 [2024-07-23 08:22:16.094517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.832 [2024-07-23 08:22:16.094558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.832 NEW_FUNC[1/699]: 0x55bae0 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:03.832 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:03.832 [2024-07-23 08:22:16.284934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.832 [2024-07-23 08:22:16.284983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.832 #30 NEW cov: 12020 ft: 11988 corp: 2/10b lim: 40 exec/s: 0 rss: 218Mb L: 9/9 MS: 2 CopyPart-CMP- DE: "L\000\000\000\000\000\000\000"- 00:09:04.092 [2024-07-23 08:22:16.365293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:5c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.092 [2024-07-23 08:22:16.365333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.092 [2024-07-23 08:22:16.435423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:5c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.092 [2024-07-23 08:22:16.435452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.092 #32 NEW cov: 12046 ft: 12528 corp: 3/20b lim: 40 
exec/s: 0 rss: 220Mb L: 10/10 MS: 1 InsertByte- 00:09:04.092 [2024-07-23 08:22:16.510306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.092 [2024-07-23 08:22:16.510337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.092 [2024-07-23 08:22:16.559901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.092 [2024-07-23 08:22:16.559934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.092 #34 NEW cov: 12058 ft: 13098 corp: 4/29b lim: 40 exec/s: 0 rss: 221Mb L: 9/10 MS: 1 ChangeBit- 00:09:04.351 [2024-07-23 08:22:16.628799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.351 [2024-07-23 08:22:16.628832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.351 [2024-07-23 08:22:16.698929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.351 [2024-07-23 08:22:16.698962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.351 #36 NEW cov: 12144 ft: 13419 corp: 5/44b lim: 40 exec/s: 0 rss: 224Mb L: 15/15 MS: 1 CrossOver- 00:09:04.351 [2024-07-23 08:22:16.766620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50434900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.351 [2024-07-23 08:22:16.766659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.351 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:04.351 [2024-07-23 08:22:16.836551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50434900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.351 [2024-07-23 08:22:16.836583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.351 #38 NEW cov: 12161 ft: 13524 corp: 6/56b lim: 40 exec/s: 0 rss: 225Mb L: 12/15 MS: 1 CMP- DE: "PCI"- 00:09:04.610 [2024-07-23 08:22:16.899822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50430100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.610 [2024-07-23 08:22:16.899855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.610 [2024-07-23 08:22:16.899965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00104900 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.610 [2024-07-23 08:22:16.899982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.610 [2024-07-23 08:22:16.969919] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50430100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.611 [2024-07-23 08:22:16.969948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.611 [2024-07-23 08:22:16.970038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00104900 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.611 [2024-07-23 08:22:16.970055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.611 #40 NEW cov: 12161 ft: 14334 corp: 7/72b lim: 40 exec/s: 0 rss: 227Mb L: 16/16 MS: 1 CMP- DE: "\001\000\000\020"- 00:09:04.611 [2024-07-23 08:22:17.039091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.611 [2024-07-23 08:22:17.039134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.611 [2024-07-23 08:22:17.108899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.611 [2024-07-23 08:22:17.108928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.870 #42 NEW cov: 12161 ft: 14364 corp: 8/87b lim: 40 exec/s: 42 rss: 228Mb L: 15/16 MS: 1 ChangeByte- 00:09:04.870 [2024-07-23 08:22:17.180419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50434900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.870 [2024-07-23 08:22:17.180459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.870 [2024-07-23 08:22:17.240297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50434900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.870 [2024-07-23 08:22:17.240330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.870 #44 NEW cov: 12161 ft: 14417 corp: 9/99b lim: 40 exec/s: 44 rss: 230Mb L: 12/16 MS: 1 ChangeBit- 00:09:04.870 [2024-07-23 08:22:17.311885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.870 [2024-07-23 08:22:17.311921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.870 [2024-07-23 08:22:17.361913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.870 [2024-07-23 08:22:17.361943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.130 #46 NEW cov: 12161 ft: 14517 corp: 10/112b lim: 40 exec/s: 46 rss: 231Mb L: 13/16 MS: 1 CopyPart- 00:09:05.130 [2024-07-23 08:22:17.428304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:09:05.130 [2024-07-23 08:22:17.428336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.130 [2024-07-23 08:22:17.478171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.130 [2024-07-23 08:22:17.478204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.130 #48 NEW cov: 12161 ft: 14587 corp: 11/121b lim: 40 exec/s: 48 rss: 232Mb L: 9/16 MS: 1 ChangeBinInt- 00:09:05.130 [2024-07-23 08:22:17.553164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:000000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.130 [2024-07-23 08:22:17.553200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.130 [2024-07-23 08:22:17.623148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:000000f1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.130 [2024-07-23 08:22:17.623182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.388 #50 NEW cov: 12161 ft: 14618 corp: 12/134b lim: 40 exec/s: 50 rss: 234Mb L: 13/16 MS: 1 ChangeByte- 00:09:05.388 [2024-07-23 08:22:17.697546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:33717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.388 [2024-07-23 08:22:17.697582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.388 [2024-07-23 08:22:17.697679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.697699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.389 [2024-07-23 08:22:17.757262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:33717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.757292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.389 [2024-07-23 08:22:17.757391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:71717171 cdw11:71717171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.757407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.389 #54 NEW cov: 12161 ft: 14660 corp: 13/151b lim: 40 exec/s: 54 rss: 235Mb L: 17/17 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:09:05.389 [2024-07-23 08:22:17.831250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.831280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.389 [2024-07-23 
08:22:17.831371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.831386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.389 [2024-07-23 08:22:17.891151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.891181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.389 [2024-07-23 08:22:17.891289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.389 [2024-07-23 08:22:17.891321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.648 #56 NEW cov: 12161 ft: 14696 corp: 14/173b lim: 40 exec/s: 56 rss: 236Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:09:05.648 [2024-07-23 08:22:17.952364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.648 [2024-07-23 08:22:17.952397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.648 [2024-07-23 08:22:18.022434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:21000000 cdw11:5c004c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.648 [2024-07-23 08:22:18.022464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.648 #58 NEW cov: 12161 ft: 14759 corp: 15/181b lim: 40 exec/s: 58 rss: 237Mb L: 8/22 MS: 1 EraseBytes- 00:09:05.648 [2024-07-23 08:22:18.099792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50004900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.648 [2024-07-23 08:22:18.099823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.648 [2024-07-23 08:22:18.099913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10000100 cdw11:4300000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.648 [2024-07-23 08:22:18.099929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.907 [2024-07-23 08:22:18.169700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4d000000 cdw11:50004900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.907 [2024-07-23 08:22:18.169733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.907 [2024-07-23 08:22:18.169826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10000100 cdw11:4300000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.907 [2024-07-23 08:22:18.169842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:09:05.907 #60 NEW cov: 12161 ft: 14772 corp: 16/197b lim: 40 exec/s: 30 rss: 239Mb L: 16/22 MS: 1 ShuffleBytes- 00:09:05.907 #60 DONE cov: 12161 ft: 14772 corp: 16/197b lim: 40 exec/s: 30 rss: 239Mb 00:09:05.907 ###### Recommended dictionary. ###### 00:09:05.907 "L\000\000\000\000\000\000\000" # Uses: 0 00:09:05.907 "PCI" # Uses: 0 00:09:05.907 "\001\000\000\020" # Uses: 0 00:09:05.907 ###### End of recommended dictionary. ###### 00:09:05.907 Done 60 runs in 2 second(s) 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:06.166 08:22:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:09:06.166 [2024-07-23 08:22:18.676193] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:06.166 [2024-07-23 08:22:18.676299] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628109 ] 00:09:06.425 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.425 [2024-07-23 08:22:18.911497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.684 [2024-07-23 08:22:19.061734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.943 [2024-07-23 08:22:19.299866] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.943 [2024-07-23 08:22:19.316140] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:09:06.943 INFO: Running with entropic power schedule (0xFF, 100). 00:09:06.943 INFO: Seed: 3340161380 00:09:06.943 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:06.943 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:06.943 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:06.943 INFO: A corpus is not provided, starting from an empty corpus 00:09:06.943 #2 INITED exec/s: 0 rss: 202Mb 00:09:06.943 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:06.943 This may also happen if the target rejected all inputs we tried so far 00:09:06.943 [2024-07-23 08:22:19.384300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.943 [2024-07-23 08:22:19.384340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.943 [2024-07-23 08:22:19.384444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.943 [2024-07-23 08:22:19.384461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.943 [2024-07-23 08:22:19.384552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:06.943 [2024-07-23 08:22:19.384565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.202 NEW_FUNC[1/699]: 0x55dc00 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:09:07.202 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:07.202 [2024-07-23 08:22:19.575050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.575105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.202 [2024-07-23 08:22:19.575223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:07.202 [2024-07-23 08:22:19.575240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.202 [2024-07-23 08:22:19.575351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.575366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.202 #12 NEW cov: 12020 ft: 11978 corp: 2/27b lim: 40 exec/s: 0 rss: 219Mb L: 26/26 MS: 4 InsertByte-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:09:07.202 [2024-07-23 08:22:19.654948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.654984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.202 [2024-07-23 08:22:19.655106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c40000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.655124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.202 [2024-07-23 08:22:19.655226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.655244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.202 [2024-07-23 08:22:19.655341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.202 [2024-07-23 08:22:19.655364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.725028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.725062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.725167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c40000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.725187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.725288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.725305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.725402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.725417] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.461 #14 NEW cov: 12044 ft: 12787 corp: 3/63b lim: 40 exec/s: 0 rss: 220Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:09:07.461 [2024-07-23 08:22:19.799983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.800013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.800119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.800137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.800234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.800249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.800349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.800365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.461 [2024-07-23 08:22:19.860070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.461 [2024-07-23 08:22:19.860101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.860214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.860248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.860353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.860376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.860479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.860498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.462 #17 NEW cov: 12056 ft: 13302 corp: 4/96b lim: 40 exec/s: 0 rss: 222Mb L: 33/36 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:07.462 [2024-07-23 08:22:19.928701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7abe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:07.462 [2024-07-23 08:22:19.928730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.928832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.928849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.928944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.928962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.929061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.929078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.979092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a7abe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.979126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.979214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.979229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.979332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.979350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.462 [2024-07-23 08:22:19.979454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:bebebebe cdw11:bebebebe SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.462 [2024-07-23 08:22:19.979468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.721 #22 NEW cov: 12142 ft: 13513 corp: 5/135b lim: 40 exec/s: 0 rss: 224Mb L: 39/39 MS: 4 CopyPart-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:09:07.721 [2024-07-23 08:22:20.040720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.040761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.040858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.040876] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.090989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.091020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.091144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.091162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.721 #27 NEW cov: 12142 ft: 13881 corp: 6/156b lim: 40 exec/s: 0 rss: 225Mb L: 21/39 MS: 4 ChangeBinInt-CrossOver-CrossOver-InsertRepeatedBytes- 00:09:07.721 [2024-07-23 08:22:20.166610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.166644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.166749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.166768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.236657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.236691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.721 [2024-07-23 08:22:20.236803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.721 [2024-07-23 08:22:20.236821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.979 #29 NEW cov: 12142 ft: 13942 corp: 7/177b lim: 40 exec/s: 0 rss: 227Mb L: 21/39 MS: 1 ChangeBit- 00:09:07.979 [2024-07-23 08:22:20.310441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.979 [2024-07-23 08:22:20.310474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.979 [2024-07-23 08:22:20.310578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.310598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.310697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.310716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.310814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.310830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.380316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.380348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.380452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.380472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.380580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.380598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.380707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.380734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.980 #31 NEW cov: 12142 ft: 13983 corp: 8/210b lim: 40 exec/s: 31 rss: 228Mb L: 33/39 MS: 1 CopyPart- 00:09:07.980 [2024-07-23 08:22:20.451171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.451202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.451312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.451329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.980 [2024-07-23 08:22:20.451439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:07.980 [2024-07-23 08:22:20.451457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.501243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.501273] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.501374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.501390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.501500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.501518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.239 #33 NEW cov: 12142 ft: 14028 corp: 9/236b lim: 40 exec/s: 33 rss: 229Mb L: 26/39 MS: 1 ShuffleBytes- 00:09:08.239 [2024-07-23 08:22:20.577331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.577365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.577469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.577488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.577610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.577628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.577732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.577750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.627536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.627571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.627681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.627701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.627799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.627816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.627918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e2e2e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.627935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.239 #39 NEW cov: 12142 ft: 14151 corp: 10/274b lim: 40 exec/s: 39 rss: 230Mb L: 38/39 MS: 5 CopyPart-CrossOver-CopyPart-InsertByte-InsertRepeatedBytes- 00:09:08.239 [2024-07-23 08:22:20.689785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.689820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.689923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.689940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.239 [2024-07-23 08:22:20.740053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.239 [2024-07-23 08:22:20.740083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.240 [2024-07-23 08:22:20.740196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.240 [2024-07-23 08:22:20.740228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.499 #41 NEW cov: 12142 ft: 14194 corp: 11/295b lim: 40 exec/s: 41 rss: 231Mb L: 21/39 MS: 1 ChangeBit- 00:09:08.499 [2024-07-23 08:22:20.802837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.802870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.802980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.802998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.852845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.852876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.852991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.853010] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.499 #43 NEW cov: 12142 ft: 14200 corp: 12/318b lim: 40 exec/s: 43 rss: 233Mb L: 23/39 MS: 1 CopyPart- 00:09:08.499 [2024-07-23 08:22:20.915824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.915857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.915973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff13 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.915990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.916092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.916113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.916213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.916230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.986001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.986031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.986125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff13 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.986144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.986253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.986270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.499 [2024-07-23 08:22:20.986370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.499 [2024-07-23 08:22:20.986387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:08.758 #50 NEW cov: 12142 ft: 14218 corp: 13/352b lim: 40 exec/s: 50 rss: 235Mb L: 34/39 MS: 1 InsertByte- 00:09:08.758 [2024-07-23 08:22:21.062804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c42c cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 
08:22:21.062840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.062946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.062967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.063082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.063107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.758 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:08.758 [2024-07-23 08:22:21.132752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f2c4c42c cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.132781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.132892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.132908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.133004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:c4c4c4c4 cdw11:c4c4c4c4 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.133018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.758 #52 NEW cov: 12159 ft: 14285 corp: 14/378b lim: 40 exec/s: 52 rss: 236Mb L: 26/39 MS: 1 ChangeByte- 00:09:08.758 [2024-07-23 08:22:21.205710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.205748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.205856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000f8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.205878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.255678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:08.758 [2024-07-23 08:22:21.255706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.758 [2024-07-23 08:22:21.255802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000000f8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:08.758 [2024-07-23 08:22:21.255818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.017 #54 NEW cov: 12159 ft: 14333 corp: 15/395b lim: 40 exec/s: 54 rss: 238Mb L: 17/39 MS: 1 EraseBytes- 00:09:09.017 [2024-07-23 08:22:21.318232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.017 [2024-07-23 08:22:21.318272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.017 [2024-07-23 08:22:21.318384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.017 [2024-07-23 08:22:21.318406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.017 [2024-07-23 08:22:21.388287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.017 [2024-07-23 08:22:21.388316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.017 [2024-07-23 08:22:21.388430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:09.017 [2024-07-23 08:22:21.388446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.017 #56 NEW cov: 12159 ft: 14401 corp: 16/418b lim: 40 exec/s: 28 rss: 239Mb L: 23/39 MS: 1 CrossOver- 00:09:09.017 #56 DONE cov: 12159 ft: 14401 corp: 16/418b lim: 40 exec/s: 28 rss: 239Mb 00:09:09.017 Done 56 runs in 2 second(s) 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 
00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:09.584 08:22:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:09:09.584 [2024-07-23 08:22:21.897419] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:09.584 [2024-07-23 08:22:21.897506] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628762 ] 00:09:09.584 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.843 [2024-07-23 08:22:22.121914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.843 [2024-07-23 08:22:22.269663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.102 [2024-07-23 08:22:22.506598] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.102 [2024-07-23 08:22:22.522884] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:09:10.102 INFO: Running with entropic power schedule (0xFF, 100). 00:09:10.102 INFO: Seed: 2252216656 00:09:10.102 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:10.102 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:10.102 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:10.102 INFO: A corpus is not provided, starting from an empty corpus 00:09:10.102 #2 INITED exec/s: 0 rss: 202Mb 00:09:10.102 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:10.102 This may also happen if the target rejected all inputs we tried so far 00:09:10.102 [2024-07-23 08:22:22.590050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.102 [2024-07-23 08:22:22.590092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.361 NEW_FUNC[1/697]: 0x55fb30 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:09:10.361 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:10.361 [2024-07-23 08:22:22.770660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.361 [2024-07-23 08:22:22.770706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.361 #9 NEW cov: 12006 ft: 11977 corp: 2/11b lim: 40 exec/s: 0 rss: 219Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:09:10.361 [2024-07-23 08:22:22.848001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.361 [2024-07-23 08:22:22.848048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.361 NEW_FUNC[1/1]: 0x1b3f110 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1528 00:09:10.620 [2024-07-23 08:22:22.898306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.620 [2024-07-23 08:22:22.898338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.620 #11 NEW cov: 12032 ft: 12625 corp: 3/20b lim: 40 exec/s: 0 rss: 220Mb L: 9/10 MS: 1 CMP- DE: "1\257\256\177+\\\026\000"- 00:09:10.620 [2024-07-23 08:22:22.973063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.620 [2024-07-23 08:22:22.973096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.620 [2024-07-23 08:22:23.042992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.620 [2024-07-23 08:22:23.043026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.620 #13 NEW cov: 12044 ft: 12883 corp: 4/29b lim: 40 exec/s: 0 rss: 222Mb L: 9/10 MS: 1 CrossOver- 00:09:10.620 [2024-07-23 08:22:23.116009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3147ae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.620 [2024-07-23 08:22:23.116044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:09:10.879 [2024-07-23 08:22:23.185840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a3147ae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.185868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.880 #15 NEW cov: 12130 ft: 13279 corp: 5/38b lim: 40 exec/s: 0 rss: 223Mb L: 9/10 MS: 1 ChangeBinInt- 00:09:10.880 [2024-07-23 08:22:23.258815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.258844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.880 [2024-07-23 08:22:23.258933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.258953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.880 [2024-07-23 08:22:23.259046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.259061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.880 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:10.880 [2024-07-23 08:22:23.308569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.308597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.880 [2024-07-23 08:22:23.308684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.308700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.880 [2024-07-23 08:22:23.308786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.308801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.880 #18 NEW cov: 12147 ft: 13709 corp: 6/67b lim: 40 exec/s: 0 rss: 225Mb L: 29/29 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:10.880 [2024-07-23 08:22:23.377206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.377239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.880 [2024-07-23 08:22:23.377329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:31afae7f cdw11:2b5c1600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.880 [2024-07-23 08:22:23.377360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.138 [2024-07-23 08:22:23.427073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.138 [2024-07-23 08:22:23.427109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.138 [2024-07-23 08:22:23.427211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:31afae7f cdw11:2b5c1600 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.138 [2024-07-23 08:22:23.427228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.138 #20 NEW cov: 12147 ft: 13962 corp: 7/84b lim: 40 exec/s: 0 rss: 226Mb L: 17/29 MS: 1 PersAutoDict- DE: "1\257\256\177+\\\026\000"- 00:09:11.138 [2024-07-23 08:22:23.495425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f6c9afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.138 [2024-07-23 08:22:23.495457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.138 [2024-07-23 08:22:23.545524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f6c9afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.138 [2024-07-23 08:22:23.545553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.138 #22 NEW cov: 12147 ft: 14085 corp: 8/93b lim: 40 exec/s: 22 rss: 227Mb L: 9/29 MS: 1 ChangeBinInt- 00:09:11.138 [2024-07-23 08:22:23.609094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2731afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.138 [2024-07-23 08:22:23.609129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.659141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2731afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.659170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 #25 NEW cov: 12147 ft: 14225 corp: 9/102b lim: 40 exec/s: 25 rss: 229Mb L: 9/29 MS: 2 ChangeByte-CrossOver- 00:09:11.397 [2024-07-23 08:22:23.726481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6d31af cdw11:ae7f2b5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.726514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.776515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a6d31af cdw11:ae7f2b5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.776545] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 #27 NEW cov: 12147 ft: 14332 corp: 10/112b lim: 40 exec/s: 27 rss: 230Mb L: 10/29 MS: 1 InsertByte- 00:09:11.397 [2024-07-23 08:22:23.838544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.838572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.838665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.838683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.838771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.838787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.838877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.838891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.838986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000460a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.839003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.888460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.888488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.888581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.888600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.888686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.888702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.888790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.888806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.397 [2024-07-23 08:22:23.888896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000460a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.397 [2024-07-23 08:22:23.888913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:11.397 #31 NEW cov: 12147 ft: 14852 corp: 11/152b lim: 40 exec/s: 31 rss: 231Mb L: 40/40 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:09:11.656 [2024-07-23 08:22:23.950432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a4b4b4b cdw11:31afae7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:23.950463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:23.950546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2b5c1600 cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:23.950562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:23.950645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:23.950660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:23.950765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:23.950780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:24.020595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a4b4b4b cdw11:31afae7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.020623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:24.020710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:2b5c1600 cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.020727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:24.020812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.020827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:24.020918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:4b4b4b4b cdw11:4b4b4b4b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.020934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 
p:0 m:0 dnr:0 00:09:11.656 #33 NEW cov: 12147 ft: 14876 corp: 12/189b lim: 40 exec/s: 33 rss: 232Mb L: 37/40 MS: 1 PersAutoDict- DE: "1\257\256\177+\\\026\000"- 00:09:11.656 [2024-07-23 08:22:24.087546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f624c9af cdw11:ae7f2b5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.087578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.656 [2024-07-23 08:22:24.157520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:f624c9af cdw11:ae7f2b5c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.656 [2024-07-23 08:22:24.157549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.915 #35 NEW cov: 12147 ft: 14929 corp: 13/199b lim: 40 exec/s: 35 rss: 234Mb L: 10/40 MS: 1 InsertByte- 00:09:11.915 [2024-07-23 08:22:24.232166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2731afae cdw11:7f2b165c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.915 [2024-07-23 08:22:24.232197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.915 [2024-07-23 08:22:24.302189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2731afae cdw11:7f2b165c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.915 [2024-07-23 08:22:24.302219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.915 #37 NEW cov: 12147 ft: 14978 corp: 14/208b lim: 40 exec/s: 37 rss: 235Mb L: 9/40 MS: 1 ShuffleBytes- 00:09:11.916 [2024-07-23 08:22:24.378206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:11.916 [2024-07-23 08:22:24.378241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.173 [2024-07-23 08:22:24.448011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a31afae cdw11:7f2b5c16 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.173 [2024-07-23 08:22:24.448046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.173 #39 NEW cov: 12147 ft: 14988 corp: 15/222b lim: 40 exec/s: 39 rss: 237Mb L: 14/40 MS: 1 EraseBytes- 00:09:12.173 [2024-07-23 08:22:24.520774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cef624c9 cdw11:afae7f2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.173 [2024-07-23 08:22:24.520813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.173 [2024-07-23 08:22:24.590629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:cef624c9 cdw11:afae7f2b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:12.173 [2024-07-23 08:22:24.590657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.173 #41 NEW cov: 12147 ft: 
15010 corp: 16/233b lim: 40 exec/s: 20 rss: 238Mb L: 11/40 MS: 1 InsertByte- 00:09:12.173 #41 DONE cov: 12147 ft: 15010 corp: 16/233b lim: 40 exec/s: 20 rss: 239Mb 00:09:12.173 ###### Recommended dictionary. ###### 00:09:12.173 "1\257\256\177+\\\026\000" # Uses: 2 00:09:12.173 ###### End of recommended dictionary. ###### 00:09:12.173 Done 41 runs in 2 second(s) 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:12.740 08:22:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:09:12.740 [2024-07-23 08:22:25.101155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
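The xtrace lines above (nvmf/run.sh@23 through run.sh@45, invoked as start_llvm_fuzz 14 1 0x1 from common.sh@73) show how each fuzzer instance is parameterized before launch. Below is a minimal bash sketch of that start_llvm_fuzz step, reconstructed only from the commands visible in this trace; the $rootdir variable, the redirection targets for the sed and echo lines, and the cleanup placement are assumptions, so treat it as a reading aid rather than the actual run.sh source.

# Sketch of start_llvm_fuzz reconstructed from the xtrace above -- not the real script.
start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3   # e.g. 14 1 0x1, as in common.sh@73
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    local suppress_file=/var/tmp/suppress_nvmf_fuzz

    # Each instance listens on its own TCP port: "44" plus the zero-padded
    # fuzzer number (13 -> 4413, 14 -> 4414, 15 -> 4415 in this log).
    local port="44$(printf %02d "$fuzzer_type")"
    local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    mkdir -p "$corpus_dir"

    # Rewrite the shared JSON template so this target listens on the per-run port
    # (redirection into $nvmf_cfg is assumed; the trace shows only the sed command).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # LeakSanitizer suppressions for allocations that intentionally outlive the run.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"
}

Per run.sh@54 in the trace, the per-run config and suppression file are removed at the start of the next iteration (rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz) rather than inside the function sketched here.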
00:09:12.740 [2024-07-23 08:22:25.101242] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629220 ] 00:09:12.740 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.999 [2024-07-23 08:22:25.319976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.999 [2024-07-23 08:22:25.468354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.258 [2024-07-23 08:22:25.707900] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.258 [2024-07-23 08:22:25.724170] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:09:13.258 INFO: Running with entropic power schedule (0xFF, 100). 00:09:13.258 INFO: Seed: 1157272612 00:09:13.258 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:13.258 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:13.258 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:13.258 INFO: A corpus is not provided, starting from an empty corpus 00:09:13.258 #2 INITED exec/s: 0 rss: 202Mb 00:09:13.258 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:13.258 This may also happen if the target rejected all inputs we tried so far 00:09:13.517 [2024-07-23 08:22:25.780010] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.517 [2024-07-23 08:22:25.780047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.517 NEW_FUNC[1/697]: 0x561a60 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:09:13.517 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:13.517 [2024-07-23 08:22:25.960714] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.517 [2024-07-23 08:22:25.960779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.517 #29 NEW cov: 11994 ft: 11964 corp: 2/8b lim: 35 exec/s: 0 rss: 218Mb L: 7/7 MS: 5 ChangeByte-ChangeByte-CopyPart-CrossOver-InsertRepeatedBytes- 00:09:13.517 [2024-07-23 08:22:26.035994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.517 [2024-07-23 08:22:26.036041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.775 NEW_FUNC[1/2]: 0x1655490 in nvmf_tcp_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3417 00:09:13.775 NEW_FUNC[2/2]: 0x1b3f110 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1528 00:09:13.775 [2024-07-23 08:22:26.095716] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.775 [2024-07-23 
08:22:26.095748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.775 #31 NEW cov: 12026 ft: 12399 corp: 3/15b lim: 35 exec/s: 0 rss: 220Mb L: 7/7 MS: 1 ShuffleBytes- 00:09:13.775 [2024-07-23 08:22:26.154454] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.775 [2024-07-23 08:22:26.154490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.775 [2024-07-23 08:22:26.194154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.775 [2024-07-23 08:22:26.194183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.775 #35 NEW cov: 12038 ft: 12828 corp: 4/23b lim: 35 exec/s: 0 rss: 221Mb L: 8/8 MS: 3 ChangeBit-ShuffleBytes-CrossOver- 00:09:13.775 [2024-07-23 08:22:26.254573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.775 [2024-07-23 08:22:26.254605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.034 [2024-07-23 08:22:26.314671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.034 [2024-07-23 08:22:26.314700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.034 #37 NEW cov: 12124 ft: 13290 corp: 5/30b lim: 35 exec/s: 0 rss: 224Mb L: 7/8 MS: 1 CrossOver- 00:09:14.034 [2024-07-23 08:22:26.375633] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.034 [2024-07-23 08:22:26.375667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.034 [2024-07-23 08:22:26.435675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.034 [2024-07-23 08:22:26.435704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.034 #39 NEW cov: 12124 ft: 13352 corp: 6/38b lim: 35 exec/s: 0 rss: 225Mb L: 8/8 MS: 1 CMP- DE: "\000\015"- 00:09:14.034 [2024-07-23 08:22:26.506592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.034 [2024-07-23 08:22:26.506622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.035 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:14.035 [2024-07-23 08:22:26.546665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.035 [2024-07-23 08:22:26.546696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.293 #41 NEW cov: 12141 ft: 13475 corp: 7/45b lim: 35 
exec/s: 0 rss: 226Mb L: 7/8 MS: 1 CrossOver- 00:09:14.293 [2024-07-23 08:22:26.618359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.293 [2024-07-23 08:22:26.618394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.293 [2024-07-23 08:22:26.658334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.293 [2024-07-23 08:22:26.658362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.293 #48 NEW cov: 12141 ft: 13489 corp: 8/52b lim: 35 exec/s: 0 rss: 228Mb L: 7/8 MS: 1 ChangeByte- 00:09:14.293 [2024-07-23 08:22:26.719676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.293 [2024-07-23 08:22:26.719710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.293 [2024-07-23 08:22:26.779425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.293 [2024-07-23 08:22:26.779456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.551 #50 NEW cov: 12141 ft: 13531 corp: 9/59b lim: 35 exec/s: 50 rss: 230Mb L: 7/8 MS: 1 PersAutoDict- DE: "\000\015"- 00:09:14.551 [2024-07-23 08:22:26.842144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.842180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.551 [2024-07-23 08:22:26.842254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.842272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.551 [2024-07-23 08:22:26.842336] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.842353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.551 NEW_FUNC[1/2]: 0x5873e0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:14.551 NEW_FUNC[2/2]: 0x14a90b0 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1765 00:09:14.551 [2024-07-23 08:22:26.901654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.901686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.551 [2024-07-23 08:22:26.901749] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 
[2024-07-23 08:22:26.901764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.551 [2024-07-23 08:22:26.901825] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.901839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.551 #52 NEW cov: 12174 ft: 14486 corp: 10/89b lim: 35 exec/s: 52 rss: 231Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:09:14.551 [2024-07-23 08:22:26.972558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:26.972591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.551 [2024-07-23 08:22:27.022696] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.551 [2024-07-23 08:22:27.022726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.551 #54 NEW cov: 12174 ft: 14663 corp: 11/98b lim: 35 exec/s: 54 rss: 232Mb L: 9/30 MS: 1 PersAutoDict- DE: "\000\015"- 00:09:14.809 [2024-07-23 08:22:27.084140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.084175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.143848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.143877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.809 #56 NEW cov: 12174 ft: 14710 corp: 12/105b lim: 35 exec/s: 56 rss: 234Mb L: 7/30 MS: 1 ChangeBit- 00:09:14.809 [2024-07-23 08:22:27.205723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.205757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.205824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.205839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.205912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.205928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.265320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.265348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.265407] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.265421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.809 [2024-07-23 08:22:27.265477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.809 [2024-07-23 08:22:27.265490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.809 #58 NEW cov: 12174 ft: 14844 corp: 13/135b lim: 35 exec/s: 58 rss: 235Mb L: 30/30 MS: 1 ShuffleBytes- 00:09:14.810 [2024-07-23 08:22:27.326667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.810 [2024-07-23 08:22:27.326699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.068 [2024-07-23 08:22:27.366334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000007e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.068 [2024-07-23 08:22:27.366363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.068 #60 NEW cov: 12174 ft: 14928 corp: 14/142b lim: 35 exec/s: 60 rss: 236Mb L: 7/30 MS: 1 ChangeByte- 00:09:15.068 [2024-07-23 08:22:27.427781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.068 [2024-07-23 08:22:27.427813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.068 [2024-07-23 08:22:27.487419] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000004a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.068 [2024-07-23 08:22:27.487449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.068 #62 NEW cov: 12174 ft: 14950 corp: 15/151b lim: 35 exec/s: 62 rss: 238Mb L: 9/30 MS: 1 InsertByte- 00:09:15.068 [2024-07-23 08:22:27.549243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.068 [2024-07-23 08:22:27.549279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.327 [2024-07-23 08:22:27.588870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.327 [2024-07-23 08:22:27.588900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.327 #64 NEW cov: 12174 ft: 15054 corp: 16/167b lim: 35 exec/s: 64 rss: 239Mb L: 16/30 MS: 1 EraseBytes- 00:09:15.327 [2024-07-23 08:22:27.659551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.327 [2024-07-23 08:22:27.659584] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.327 [2024-07-23 08:22:27.699701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.327 [2024-07-23 08:22:27.699729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.327 #66 NEW cov: 12174 ft: 15105 corp: 17/176b lim: 35 exec/s: 66 rss: 240Mb L: 9/30 MS: 1 PersAutoDict- DE: "\000\015"- 00:09:15.327 NEW_FUNC[1/3]: 0x582430 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:09:15.327 NEW_FUNC[2/3]: 0x149c130 in temp_threshold_opts_valid /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1640 00:09:15.327 #68 NEW cov: 12228 ft: 15168 corp: 18/183b lim: 35 exec/s: 34 rss: 241Mb L: 7/30 MS: 1 ChangeBit- 00:09:15.327 #68 DONE cov: 12228 ft: 15168 corp: 18/183b lim: 35 exec/s: 34 rss: 241Mb 00:09:15.327 ###### Recommended dictionary. ###### 00:09:15.327 "\000\015" # Uses: 3 00:09:15.327 ###### End of recommended dictionary. ###### 00:09:15.327 Done 68 runs in 2 second(s) 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:15.895 08:22:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:09:15.895 [2024-07-23 08:22:28.284237] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:15.895 [2024-07-23 08:22:28.284342] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629697 ] 00:09:15.895 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.154 [2024-07-23 08:22:28.510154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.154 [2024-07-23 08:22:28.660533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.412 [2024-07-23 08:22:28.898113] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.412 [2024-07-23 08:22:28.914393] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:09:16.412 INFO: Running with entropic power schedule (0xFF, 100). 00:09:16.412 INFO: Seed: 52280275 00:09:16.671 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:16.671 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:16.671 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:16.671 INFO: A corpus is not provided, starting from an empty corpus 00:09:16.671 #2 INITED exec/s: 0 rss: 202Mb 00:09:16.671 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:16.671 This may also happen if the target rejected all inputs we tried so far 00:09:16.671 [2024-07-23 08:22:28.974397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:28.974429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:28.974486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:28.974502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:28.974555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:28.974568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:28.974621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:28.974634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.671 NEW_FUNC[1/697]: 0x5631b0 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:09:16.671 NEW_FUNC[2/697]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:16.671 [2024-07-23 08:22:29.155007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:29.155063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:29.155140] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:29.155159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:29.155220] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:29.155240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.671 [2024-07-23 08:22:29.155303] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.671 [2024-07-23 08:22:29.155319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.930 #26 NEW cov: 11986 ft: 11990 corp: 2/33b lim: 35 exec/s: 0 rss: 219Mb L: 32/32 MS: 3 CrossOver-EraseBytes-InsertRepeatedBytes- 00:09:16.930 [2024-07-23 08:22:29.228658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 
08:22:29.228700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.228763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.228779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.228842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.228859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.228920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.228936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.930 NEW_FUNC[1/1]: 0x22016b0 in spdk_thread_get_last_tsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1324 00:09:16.930 [2024-07-23 08:22:29.288377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.288409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.288464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.288482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.288540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.288553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.288608] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.288624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.930 #28 NEW cov: 12014 ft: 12522 corp: 3/67b lim: 35 exec/s: 0 rss: 220Mb L: 34/34 MS: 1 CrossOver- 00:09:16.930 [2024-07-23 08:22:29.356725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.356758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.356820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.356835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.356889] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.356904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.356958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.356972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.357031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.357045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.416794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.416824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.416886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.416900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.416955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.416968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.930 [2024-07-23 08:22:29.417025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.930 [2024-07-23 08:22:29.417039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.931 [2024-07-23 08:22:29.417101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.931 [2024-07-23 08:22:29.417115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.931 #30 NEW cov: 12026 ft: 12909 corp: 4/102b lim: 35 exec/s: 0 rss: 222Mb L: 35/35 MS: 1 InsertByte- 00:09:17.189 [2024-07-23 08:22:29.467645] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.467678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.467742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.467761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.189 
[2024-07-23 08:22:29.467827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.467842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.467901] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.467916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.467973] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.467987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.507619] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.507649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.507709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.507723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.507780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.189 [2024-07-23 08:22:29.507794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.189 [2024-07-23 08:22:29.507849] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.507863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.507920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.507933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.190 #32 NEW cov: 12112 ft: 13318 corp: 5/137b lim: 35 exec/s: 0 rss: 224Mb L: 35/35 MS: 1 CopyPart- 00:09:17.190 [2024-07-23 08:22:29.568074] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.568112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.568176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.568192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 
p:0 m:0 dnr:0 00:09:17.190 NEW_FUNC[1/1]: 0x5873e0 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:09:17.190 [2024-07-23 08:22:29.607752] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.607782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.607841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.607859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.190 #37 NEW cov: 12126 ft: 13914 corp: 6/161b lim: 35 exec/s: 0 rss: 225Mb L: 24/35 MS: 4 InsertByte-EraseBytes-CopyPart-InsertRepeatedBytes- 00:09:17.190 [2024-07-23 08:22:29.664564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.664609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.664694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.664717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.664794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.664815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.664891] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.664913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.190 [2024-07-23 08:22:29.664987] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.190 [2024-07-23 08:22:29.665008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.190 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:17.449 [2024-07-23 08:22:29.723820] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.723849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.723912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.723926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.723982] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.723995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.724049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.724062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.724118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.724132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.449 #39 NEW cov: 12143 ft: 14060 corp: 7/196b lim: 35 exec/s: 0 rss: 226Mb L: 35/35 MS: 1 ChangeBit- 00:09:17.449 [2024-07-23 08:22:29.783119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.783156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.783228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.783251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.783319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.783338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.783405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.783423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.783492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.783511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.822350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.822379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.822437] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.822452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.822506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.822519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.449 [2024-07-23 08:22:29.822570] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.449 [2024-07-23 08:22:29.822585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.822637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.822650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.450 #41 NEW cov: 12143 ft: 14152 corp: 8/231b lim: 35 exec/s: 0 rss: 228Mb L: 35/35 MS: 1 ChangeByte- 00:09:17.450 [2024-07-23 08:22:29.877908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.877939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.878002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.878017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.878073] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.878087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.878151] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.878169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.917428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.917456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.917514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.917528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.917585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.917598] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.450 [2024-07-23 08:22:29.917653] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.450 [2024-07-23 08:22:29.917666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.450 #48 NEW cov: 12143 ft: 14193 corp: 9/262b lim: 35 exec/s: 0 rss: 229Mb L: 31/35 MS: 1 EraseBytes- 00:09:17.709 [2024-07-23 08:22:29.969956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:29.969988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:29.970049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:29.970064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:29.970132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:29.970147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:29.970214] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:29.970228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:29.970288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:29.970303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.030563] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.030604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.030666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.030681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.030739] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.030753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.030815] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 
08:22:30.030828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.030885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.030898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.709 #50 NEW cov: 12143 ft: 14221 corp: 10/297b lim: 35 exec/s: 50 rss: 230Mb L: 35/35 MS: 1 CopyPart- 00:09:17.709 [2024-07-23 08:22:30.075160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.075205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.075274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.075293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.115189] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.115220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.115282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.115297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.709 #52 NEW cov: 12143 ft: 14466 corp: 11/315b lim: 35 exec/s: 52 rss: 232Mb L: 18/35 MS: 1 EraseBytes- 00:09:17.709 [2024-07-23 08:22:30.175850] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.175886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.175951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.175968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.176031] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.176048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.176119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.176135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.709 [2024-07-23 08:22:30.176178] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.709 [2024-07-23 08:22:30.176192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.235420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.235451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.235509] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.235527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.235587] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.235601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.235656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.235670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.235723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.235736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.968 #54 NEW cov: 12143 ft: 14567 corp: 12/350b lim: 35 exec/s: 54 rss: 234Mb L: 35/35 MS: 1 CopyPart- 00:09:17.968 [2024-07-23 08:22:30.296956] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.296989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.297053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.297070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.297138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.297155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.297215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.297230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 
08:22:30.336537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.336566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.336643] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.336658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.336722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.336736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.336790] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.336804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.968 #56 NEW cov: 12143 ft: 14682 corp: 13/382b lim: 35 exec/s: 56 rss: 235Mb L: 32/35 MS: 1 ShuffleBytes- 00:09:17.968 [2024-07-23 08:22:30.387155] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.387191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.387254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.387269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.387332] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.387346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.387400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.387414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.387474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.387488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.447152] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.447180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:09:17.968 [2024-07-23 08:22:30.447239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.447254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.447306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.447319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.447372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.968 [2024-07-23 08:22:30.447386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:17.968 [2024-07-23 08:22:30.447440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.969 [2024-07-23 08:22:30.447454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:17.969 #58 NEW cov: 12143 ft: 14705 corp: 14/417b lim: 35 exec/s: 58 rss: 236Mb L: 35/35 MS: 1 ChangeByte- 00:09:18.228 [2024-07-23 08:22:30.497628] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.497660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.497724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.497740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.497796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.497811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.557654] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000000dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.557683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.557740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.557790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.557848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.557861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.228 #62 NEW cov: 12143 ft: 14829 corp: 15/438b lim: 35 exec/s: 62 rss: 237Mb L: 21/35 MS: 3 CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:09:18.228 [2024-07-23 08:22:30.608358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.608389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.608447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.608461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.608518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.608533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.608590] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.608604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.608667] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.608681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.648356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.648384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.648441] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.648456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.648512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.648525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.648585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.648598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.648656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.648670] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.228 #64 NEW cov: 12143 ft: 14870 corp: 16/473b lim: 35 exec/s: 64 rss: 239Mb L: 35/35 MS: 1 CrossOver- 00:09:18.228 [2024-07-23 08:22:30.700161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.700193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.700254] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.700270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.700333] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.700347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.700405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.700420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.228 [2024-07-23 08:22:30.700485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.228 [2024-07-23 08:22:30.700501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.487 [2024-07-23 08:22:30.760192] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.487 [2024-07-23 08:22:30.760220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.487 [2024-07-23 08:22:30.760285] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.487 [2024-07-23 08:22:30.760300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.487 [2024-07-23 08:22:30.760354] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.487 [2024-07-23 08:22:30.760367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.487 [2024-07-23 08:22:30.760422] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.760436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.760490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 
08:22:30.760504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.488 #66 NEW cov: 12143 ft: 14907 corp: 17/508b lim: 35 exec/s: 66 rss: 240Mb L: 35/35 MS: 1 CopyPart- 00:09:18.488 [2024-07-23 08:22:30.817647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.817678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.817737] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.817756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.817816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.817830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.817884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.817898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.817955] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.817968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.857309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.857337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.857399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.857414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.857469] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.857482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.857537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.857551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.857605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:000006dd SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:09:18.488 [2024-07-23 08:22:30.857618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:18.488 #73 NEW cov: 12143 ft: 14927 corp: 18/543b lim: 35 exec/s: 73 rss: 241Mb L: 35/35 MS: 1 ChangeBit- 00:09:18.488 [2024-07-23 08:22:30.905870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.905912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.905990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.906013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.965198] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.965227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.488 [2024-07-23 08:22:30.965286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006cb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.488 [2024-07-23 08:22:30.965301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.488 #75 NEW cov: 12143 ft: 15047 corp: 19/567b lim: 35 exec/s: 37 rss: 243Mb L: 24/35 MS: 1 ChangeBit- 00:09:18.488 #75 DONE cov: 12143 ft: 15047 corp: 19/567b lim: 35 exec/s: 37 rss: 243Mb 00:09:18.488 Done 75 runs in 2 second(s) 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:19.056 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:19.057 08:22:31 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:09:19.057 [2024-07-23 08:22:31.474418] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:19.057 [2024-07-23 08:22:31.474525] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630329 ] 00:09:19.057 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.316 [2024-07-23 08:22:31.692404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.575 [2024-07-23 08:22:31.841199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.575 [2024-07-23 08:22:32.078210] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.575 [2024-07-23 08:22:32.094515] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:09:19.835 INFO: Running with entropic power schedule (0xFF, 100). 00:09:19.835 INFO: Seed: 3232296050 00:09:19.835 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:19.835 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:19.835 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:19.835 INFO: A corpus is not provided, starting from an empty corpus 00:09:19.835 #2 INITED exec/s: 0 rss: 201Mb 00:09:19.835 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
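[Annotation] The banner above is standard libFuzzer startup output: the seed, the inline 8-bit counter and PC-table counts, and the empty-corpus warning all come from the fuzzing engine, and the NEW_FUNC lines that follow are libFuzzer reporting first coverage of functions such as TestOneInput and fuzz_nvm_read_command in llvm_nvme_fuzz.c. As a rough illustration of the shape of such a harness (not SPDK's actual code; the struct layout and the names toy_cmd/toy_submit below are invented for the sketch), a minimal libFuzzer entry point that reinterprets the raw input bytes as a command looks like this:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical command layout, for the sketch only; the real harness
     * builds an spdk_nvme_cmd and submits it to the target under test. */
    struct toy_cmd {
            uint8_t  opc;   /* opcode selecting the operation to fuzz */
            uint64_t lba;   /* with 0xaa-filled input this decodes to 0xaaaaaaaaaaaaaaaa */
            uint32_t len;
    };

    static void toy_submit(const struct toy_cmd *cmd)
    {
            (void)cmd;      /* the real code queues the command and polls for a completion */
    }

    /* libFuzzer calls this once per generated input. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
            struct toy_cmd cmd = { 0 };

            if (size < sizeof(cmd)) {
                    return 0;   /* too short to form a command; inputs rejected here
                                 * are what the warning above is hinting at */
            }
            memcpy(&cmd, data, sizeof(cmd));
            toy_submit(&cmd);
            return 0;
    }

Built with clang -g -fsanitize=fuzzer, a harness of this shape produces startup output of the same form as the INFO/WARNING lines in this log.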
00:09:19.835 This may also happen if the target rejected all inputs we tried so far 00:09:19.835 [2024-07-23 08:22:32.154341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.154377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.154420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.154439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.154483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.154499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.154550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.154566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:19.835 NEW_FUNC[1/699]: 0x564870 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:09:19.835 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:19.835 [2024-07-23 08:22:32.335370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.335427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.335499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.335522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.335585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.335606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:19.835 [2024-07-23 08:22:32.335666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.835 [2024-07-23 08:22:32.335687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.094 #9 NEW cov: 12095 ft: 12068 corp: 2/99b lim: 105 exec/s: 0 rss: 218Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:09:20.094 [2024-07-23 
08:22:32.412239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.412289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.412353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.412376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.412440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.412469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.451858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.451890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.451933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.451950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.452001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.452017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.094 #18 NEW cov: 12119 ft: 13046 corp: 3/162b lim: 105 exec/s: 0 rss: 219Mb L: 63/98 MS: 3 CMP-InsertByte-InsertRepeatedBytes- DE: "\037\000\000\000"- 00:09:20.094 [2024-07-23 08:22:32.520780] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.520816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.520862] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.520880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.520933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.520951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.521006] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.521025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.580955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.580986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.581038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.581055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.581113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.581128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.094 [2024-07-23 08:22:32.581181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.094 [2024-07-23 08:22:32.581197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.353 #20 NEW cov: 12131 ft: 13432 corp: 4/260b lim: 105 exec/s: 0 rss: 222Mb L: 98/98 MS: 1 ShuffleBytes- 00:09:20.353 [2024-07-23 08:22:32.641938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.641974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.642031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.642046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.642114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.642133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.701616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.701648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.701687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.701703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.701755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.701786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.353 #22 NEW cov: 12217 ft: 13861 corp: 5/327b lim: 105 exec/s: 0 rss: 224Mb L: 67/98 MS: 1 CMP- DE: "\000\000\000R"- 00:09:20.353 [2024-07-23 08:22:32.762914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.762950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.763010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297697441077701290 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.763027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.763092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.763116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.822650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.822680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.822733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297697441077701290 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.822750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.353 [2024-07-23 08:22:32.822806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.353 [2024-07-23 08:22:32.822821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.353 #24 NEW cov: 12217 ft: 14074 corp: 6/395b lim: 105 exec/s: 0 rss: 226Mb L: 68/98 MS: 1 InsertByte- 00:09:20.612 [2024-07-23 08:22:32.883928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.883962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:32.884019] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.884034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:32.884104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.884121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.612 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:20.612 [2024-07-23 08:22:32.943638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.943670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:32.943714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.943733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:32.943787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:32.943803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.612 #26 NEW cov: 12234 ft: 14178 corp: 7/463b lim: 105 exec/s: 0 rss: 227Mb L: 68/98 MS: 1 CopyPart- 00:09:20.612 [2024-07-23 08:22:33.015587] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.015619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:33.015666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.015685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:33.055575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.055606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:33.055646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.055664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
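[Annotation] The improbable-looking values in the READ command prints above fall straight out of the fuzz input bytes: lba:12297829382473034410 is 0xaaaaaaaaaaaaaaaa (an InsertRepeatedBytes mutation filling the 64-bit LBA field with 0xaa), lba:18446744073709551615 is 2^64 - 1 (an all-0xff fill), and len:43691 is 0xaaab, consistent with the same 0xaa fill. A few lines of C confirm the correspondence:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            /* 0xaa repeated across a 64-bit LBA field */
            printf("%" PRIu64 "\n", (uint64_t)0xaaaaaaaaaaaaaaaaULL); /* 12297829382473034410 */
            /* all-ones 64-bit LBA */
            printf("%" PRIu64 "\n", UINT64_MAX);                      /* 18446744073709551615 */
            /* 0xaaab interpreted as an unsigned length */
            printf("%u\n", 0xaaabU);                                  /* 43691 */
            return 0;
    }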
00:09:20.612 #29 NEW cov: 12234 ft: 14517 corp: 8/521b lim: 105 exec/s: 0 rss: 228Mb L: 58/98 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:20.612 [2024-07-23 08:22:33.117008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.117042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:33.117112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.117132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.612 [2024-07-23 08:22:33.117188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829008810879658 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.612 [2024-07-23 08:22:33.117204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.156664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:83 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.156694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.156748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.156766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.156819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12297829008810879658 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.156834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.871 #31 NEW cov: 12234 ft: 14545 corp: 9/589b lim: 105 exec/s: 31 rss: 230Mb L: 68/98 MS: 1 InsertByte- 00:09:20.871 [2024-07-23 08:22:33.217466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.217500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.217556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.217573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.277181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.277211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.277275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.277291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.871 #33 NEW cov: 12234 ft: 14587 corp: 10/651b lim: 105 exec/s: 33 rss: 231Mb L: 62/98 MS: 1 CMP- DE: "nvmf"- 00:09:20.871 [2024-07-23 08:22:33.349561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.349593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.349653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7523377976271612586 len:26729 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.349669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.349722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7523377975159973992 len:26729 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.349738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.349793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.349808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.389478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.389509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.389559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:7523377976271612586 len:26729 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.389576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.389630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:7523377975159973992 len:26729 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.389647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:20.871 [2024-07-23 08:22:33.389698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:20.871 [2024-07-23 08:22:33.389713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 
00:09:21.129 #35 NEW cov: 12234 ft: 14695 corp: 11/751b lim: 105 exec/s: 35 rss: 232Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:09:21.129 [2024-07-23 08:22:33.450269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.450304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.450362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.450380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.489989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.490019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.490060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.490076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.129 #37 NEW cov: 12234 ft: 14712 corp: 12/809b lim: 105 exec/s: 37 rss: 233Mb L: 58/100 MS: 1 ChangeByte- 00:09:21.129 [2024-07-23 08:22:33.550845] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.550880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.550939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.550959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.590559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:12297829379779570346 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.590595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.129 [2024-07-23 08:22:33.590642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12297829382473034410 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.129 [2024-07-23 08:22:33.590659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.129 #39 NEW cov: 12234 ft: 14720 corp: 13/858b lim: 105 exec/s: 39 rss: 234Mb L: 49/100 MS: 1 EraseBytes- 00:09:21.388 [2024-07-23 08:22:33.652109] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:21.388 [2024-07-23 08:22:33.652147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.652208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.652225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.652283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5956761107124191232 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.652301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.701799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.701830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.701884] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.701900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.701952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:5956761107124191232 len:43691 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.701968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:21.388 #41 NEW cov: 12234 ft: 14727 corp: 14/940b lim: 105 exec/s: 41 rss: 236Mb L: 82/100 MS: 1 CrossOver- 00:09:21.388 [2024-07-23 08:22:33.763285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.763324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.763372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.763392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.822984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.823015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.823065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.823082] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.388 #43 NEW cov: 12234 ft: 14806 corp: 15/1002b lim: 105 exec/s: 43 rss: 237Mb L: 62/100 MS: 1 ChangeByte- 00:09:21.388 [2024-07-23 08:22:33.885175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446743296505020415 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.885212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.388 [2024-07-23 08:22:33.885281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.388 [2024-07-23 08:22:33.885302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.646 [2024-07-23 08:22:33.924649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446743296505020415 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.646 [2024-07-23 08:22:33.924682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.646 [2024-07-23 08:22:33.924724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744071142637567 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.646 [2024-07-23 08:22:33.924741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.646 #50 NEW cov: 12234 ft: 14863 corp: 16/1064b lim: 105 exec/s: 50 rss: 238Mb L: 62/100 MS: 1 ChangeByte- 00:09:21.646 [2024-07-23 08:22:33.985938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:33.985975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.647 [2024-07-23 08:22:34.025596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:34.025627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.647 #52 NEW cov: 12234 ft: 15358 corp: 17/1104b lim: 105 exec/s: 52 rss: 239Mb L: 40/100 MS: 1 CrossOver- 00:09:21.647 [2024-07-23 08:22:34.086831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:34.086866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.647 [2024-07-23 08:22:34.086917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:34.086936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.647 [2024-07-23 08:22:34.126507] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:34.126538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:21.647 [2024-07-23 08:22:34.126583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:21.647 [2024-07-23 08:22:34.126598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:21.647 #54 NEW cov: 12234 ft: 15371 corp: 18/1162b lim: 105 exec/s: 27 rss: 241Mb L: 58/100 MS: 1 ChangeByte- 00:09:21.647 #54 DONE cov: 12234 ft: 15371 corp: 18/1162b lim: 105 exec/s: 27 rss: 241Mb 00:09:21.647 ###### Recommended dictionary. ###### 00:09:21.647 "\037\000\000\000" # Uses: 1 00:09:21.647 "\000\000\000R" # Uses: 0 00:09:21.647 "nvmf" # Uses: 1 00:09:21.647 ###### End of recommended dictionary. ###### 00:09:21.647 Done 54 runs in 2 second(s) 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:22.229 08:22:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:09:22.229 [2024-07-23 08:22:34.639660] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:22.229 [2024-07-23 08:22:34.639746] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630789 ] 00:09:22.229 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.488 [2024-07-23 08:22:34.865358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.747 [2024-07-23 08:22:35.016522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.747 [2024-07-23 08:22:35.258845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.006 [2024-07-23 08:22:35.275119] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:23.006 INFO: Running with entropic power schedule (0xFF, 100). 00:09:23.006 INFO: Seed: 2120316416 00:09:23.006 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:23.006 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:23.006 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:23.006 INFO: A corpus is not provided, starting from an empty corpus 00:09:23.006 #2 INITED exec/s: 0 rss: 201Mb 00:09:23.006 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:23.006 This may also happen if the target rejected all inputs we tried so far 00:09:23.006 [2024-07-23 08:22:35.342995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.006 [2024-07-23 08:22:35.343041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.006 [2024-07-23 08:22:35.343127] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.006 [2024-07-23 08:22:35.343145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.006 [2024-07-23 08:22:35.343223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.006 [2024-07-23 08:22:35.343239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.006 [2024-07-23 08:22:35.343331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.006 [2024-07-23 08:22:35.343348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.006 NEW_FUNC[1/699]: 0x568360 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:23.006 NEW_FUNC[2/699]: 0x58f640 in TestOneInput 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:23.265 [2024-07-23 08:22:35.526185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.526230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.265 [2024-07-23 08:22:35.526322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.526337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.265 [2024-07-23 08:22:35.526420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.526439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.265 [2024-07-23 08:22:35.526524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.526543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.265 #8 NEW cov: 12115 ft: 12089 corp: 2/117b lim: 120 exec/s: 0 rss: 218Mb L: 116/116 MS: 5 ChangeBit-ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:09:23.265 [2024-07-23 08:22:35.607588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.607639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.265 NEW_FUNC[1/1]: 0x128bdb0 in _sock_flush /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1316 00:09:23.265 [2024-07-23 08:22:35.667672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.667705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.265 #11 NEW cov: 12140 ft: 13462 corp: 3/150b lim: 120 exec/s: 0 rss: 220Mb L: 33/116 MS: 2 ChangeByte-CrossOver- 00:09:23.265 [2024-07-23 08:22:35.728888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.728926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.265 [2024-07-23 08:22:35.729024] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744071823163391 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.265 [2024-07-23 08:22:35.729040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.524 [2024-07-23 
08:22:35.798989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.799022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.524 [2024-07-23 08:22:35.799105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744071823163391 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.799124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.524 #13 NEW cov: 12152 ft: 14191 corp: 4/221b lim: 120 exec/s: 0 rss: 221Mb L: 71/116 MS: 1 InsertRepeatedBytes- 00:09:23.524 [2024-07-23 08:22:35.871581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.871618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.524 [2024-07-23 08:22:35.931686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.931719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.524 #15 NEW cov: 12238 ft: 14432 corp: 5/254b lim: 120 exec/s: 0 rss: 222Mb L: 33/116 MS: 1 ChangeBit- 00:09:23.524 [2024-07-23 08:22:35.999490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.999527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.524 [2024-07-23 08:22:35.999595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744071823163391 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.999617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.524 [2024-07-23 08:22:35.999693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.524 [2024-07-23 08:22:35.999714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.524 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:23.783 [2024-07-23 08:22:36.069505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.783 [2024-07-23 08:22:36.069544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.783 [2024-07-23 08:22:36.069619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744071823163391 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 
08:22:36.069641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.069691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.069707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.784 #22 NEW cov: 12255 ft: 14977 corp: 6/332b lim: 120 exec/s: 0 rss: 224Mb L: 78/116 MS: 1 InsertRepeatedBytes- 00:09:23.784 [2024-07-23 08:22:36.137480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:177143808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.137515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.137578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.137597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.137654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.137674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.137758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.137779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.187733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:177143808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.187767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.187840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.187860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.187936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.187952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.188040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.188058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:23.784 #25 NEW cov: 12255 ft: 15135 corp: 7/441b lim: 120 exec/s: 0 rss: 
226Mb L: 109/116 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:23.784 [2024-07-23 08:22:36.249328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.249359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.249431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.249449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.249528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.249545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:23.784 [2024-07-23 08:22:36.249630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.784 [2024-07-23 08:22:36.249649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.319454] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.319489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.319575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.319595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.319675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.319694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.319773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.319792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.043 #27 NEW cov: 12255 ft: 15163 corp: 8/557b lim: 120 exec/s: 27 rss: 227Mb L: 116/116 MS: 1 CrossOver- 00:09:24.043 [2024-07-23 08:22:36.391872] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13021231110367261876 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.391912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.043 
[2024-07-23 08:22:36.391990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.392011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.392107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.392130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.441736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13021231110367261876 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.441776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.441852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.441872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.441941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.441959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.043 #32 NEW cov: 12255 ft: 15218 corp: 9/636b lim: 120 exec/s: 32 rss: 228Mb L: 79/116 MS: 4 ChangeBit-InsertByte-CopyPart-InsertRepeatedBytes- 00:09:24.043 [2024-07-23 08:22:36.514987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:177143808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.515031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.515107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.515133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.515220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.515243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.043 [2024-07-23 08:22:36.515334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.043 [2024-07-23 08:22:36.515356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.584836] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:177143808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.584874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.584945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.584965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.585042] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.585063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.585151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.585171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.303 #34 NEW cov: 12255 ft: 15263 corp: 10/745b lim: 120 exec/s: 34 rss: 230Mb L: 109/116 MS: 1 ShuffleBytes- 00:09:24.303 [2024-07-23 08:22:36.661561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.661598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.661671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.661688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.661741] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.661761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.661847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.661866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.731783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.731817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.731887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.731906] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.731979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.732000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.732080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.732102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.303 #36 NEW cov: 12255 ft: 15292 corp: 11/861b lim: 120 exec/s: 36 rss: 232Mb L: 116/116 MS: 1 ChangeByte- 00:09:24.303 [2024-07-23 08:22:36.806563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.806597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.806670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.806690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.303 [2024-07-23 08:22:36.806773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.303 [2024-07-23 08:22:36.806789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.856703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.856737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.856800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.856825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.856891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.856910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.562 #38 NEW cov: 12255 ft: 15336 corp: 12/955b lim: 120 exec/s: 38 rss: 233Mb L: 94/116 MS: 1 EraseBytes- 00:09:24.562 [2024-07-23 08:22:36.914918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 
[2024-07-23 08:22:36.914953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.915023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.915053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.915116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.915145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.915230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.915248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.965195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.965228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.965300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.965317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.965402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.965422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:36.965504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:36.965522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.562 #40 NEW cov: 12255 ft: 15412 corp: 13/1071b lim: 120 exec/s: 40 rss: 234Mb L: 116/116 MS: 1 ShuffleBytes- 00:09:24.562 [2024-07-23 08:22:37.036720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13021231110367261876 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:37.036755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.562 [2024-07-23 08:22:37.036826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.562 [2024-07-23 08:22:37.036847] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.563 [2024-07-23 08:22:37.036922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.563 [2024-07-23 08:22:37.036944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.563 [2024-07-23 08:22:37.037037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.563 [2024-07-23 08:22:37.037055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.106929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13021231110367261876 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.106967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.107035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.107053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.107130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.107151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.107246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:13021231110853801140 len:46261 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.107268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.822 #42 NEW cov: 12255 ft: 15500 corp: 14/1170b lim: 120 exec/s: 42 rss: 235Mb L: 99/116 MS: 1 CopyPart- 00:09:24.822 [2024-07-23 08:22:37.171047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.171084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.171168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.171188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.171273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.171290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.171376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.171394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.241189] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644713520992143 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.241223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.241302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:10344644715844964239 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.241323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.241383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744071830503423 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.241402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.241485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:10344644717731352463 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.241507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:24.822 #44 NEW cov: 12255 ft: 15544 corp: 15/1286b lim: 120 exec/s: 44 rss: 237Mb L: 116/116 MS: 1 ShuffleBytes- 00:09:24.822 [2024-07-23 08:22:37.311651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.311686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:24.822 [2024-07-23 08:22:37.311753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2408513536 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.822 [2024-07-23 08:22:37.311772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.081 [2024-07-23 08:22:37.361544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10344644714023946895 len:36752 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.081 [2024-07-23 08:22:37.361579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:25.081 [2024-07-23 08:22:37.361671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2408513536 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:25.081 [2024-07-23 08:22:37.361685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:25.081 #46 NEW cov: 12255 ft: 
15580 corp: 16/1357b lim: 120 exec/s: 23 rss: 238Mb L: 71/116 MS: 1 ChangeBinInt- 00:09:25.081 #46 DONE cov: 12255 ft: 15580 corp: 16/1357b lim: 120 exec/s: 23 rss: 238Mb 00:09:25.081 Done 46 runs in 2 second(s) 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:25.340 08:22:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:09:25.599 [2024-07-23 08:22:37.873963] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:25.599 [2024-07-23 08:22:37.874067] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631333 ] 00:09:25.599 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.599 [2024-07-23 08:22:38.099652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.858 [2024-07-23 08:22:38.248806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.117 [2024-07-23 08:22:38.487031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.117 [2024-07-23 08:22:38.503299] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:09:26.117 INFO: Running with entropic power schedule (0xFF, 100). 00:09:26.117 INFO: Seed: 1053339465 00:09:26.117 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:26.117 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:26.117 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:26.117 INFO: A corpus is not provided, starting from an empty corpus 00:09:26.117 #2 INITED exec/s: 0 rss: 202Mb 00:09:26.117 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:26.117 This may also happen if the target rejected all inputs we tried so far 00:09:26.117 [2024-07-23 08:22:38.548710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.117 [2024-07-23 08:22:38.548750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.376 NEW_FUNC[1/698]: 0x56c490 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:26.376 NEW_FUNC[2/698]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:26.376 [2024-07-23 08:22:38.750836] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.376 [2024-07-23 08:22:38.750891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.376 #4 NEW cov: 12059 ft: 12030 corp: 2/26b lim: 100 exec/s: 0 rss: 218Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:09:26.376 [2024-07-23 08:22:38.827120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.376 [2024-07-23 08:22:38.827175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.376 [2024-07-23 08:22:38.886688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.376 [2024-07-23 08:22:38.886725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.634 #17 NEW cov: 12083 ft: 12560 corp: 3/52b lim: 100 exec/s: 0 rss: 219Mb L: 26/26 MS: 2 ShuffleBytes-CrossOver- 00:09:26.634 [2024-07-23 08:22:38.946105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.634 [2024-07-23 08:22:38.946160] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.634 [2024-07-23 08:22:39.035627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.634 [2024-07-23 08:22:39.035663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.634 #19 NEW cov: 12095 ft: 12815 corp: 4/78b lim: 100 exec/s: 0 rss: 222Mb L: 26/26 MS: 1 InsertByte- 00:09:26.634 [2024-07-23 08:22:39.097083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.634 [2024-07-23 08:22:39.097141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.156643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.893 [2024-07-23 08:22:39.156681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.893 #23 NEW cov: 12181 ft: 13191 corp: 5/112b lim: 100 exec/s: 0 rss: 224Mb L: 34/34 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:09:26.893 [2024-07-23 08:22:39.218801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.893 [2024-07-23 08:22:39.218855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.218917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.893 [2024-07-23 08:22:39.218949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.219005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.893 [2024-07-23 08:22:39.219034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.893 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:26.893 [2024-07-23 08:22:39.287935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.893 [2024-07-23 08:22:39.287970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.288020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.893 [2024-07-23 08:22:39.288038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.288071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.893 [2024-07-23 08:22:39.288088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:26.893 #27 NEW cov: 12198 ft: 13670 corp: 6/177b lim: 100 exec/s: 0 rss: 225Mb L: 65/65 MS: 3 CMP-CopyPart-InsertRepeatedBytes- DE: "nvmf"- 00:09:26.893 [2024-07-23 08:22:39.349335] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:26.893 [2024-07-23 08:22:39.349381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.349434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:26.893 [2024-07-23 08:22:39.349461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:26.893 [2024-07-23 08:22:39.349506] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:26.893 [2024-07-23 08:22:39.349534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.151 [2024-07-23 08:22:39.438962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.152 [2024-07-23 08:22:39.438997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.152 [2024-07-23 08:22:39.439047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.152 [2024-07-23 08:22:39.439065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.152 [2024-07-23 08:22:39.439111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.152 [2024-07-23 08:22:39.439129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.152 #29 NEW cov: 12198 ft: 13740 corp: 7/242b lim: 100 exec/s: 0 rss: 226Mb L: 65/65 MS: 1 ChangeBinInt- 00:09:27.152 [2024-07-23 08:22:39.500748] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.152 [2024-07-23 08:22:39.500791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.152 [2024-07-23 08:22:39.590557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.152 [2024-07-23 08:22:39.590593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.152 #31 NEW cov: 12198 ft: 13884 corp: 8/268b lim: 100 exec/s: 31 rss: 228Mb L: 26/65 MS: 1 ChangeBit- 00:09:27.152 [2024-07-23 08:22:39.653018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.152 [2024-07-23 08:22:39.653074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.410 [2024-07-23 08:22:39.712448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.410 [2024-07-23 08:22:39.712485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.410 #33 NEW cov: 12198 ft: 13964 corp: 9/294b lim: 100 exec/s: 33 rss: 229Mb L: 26/65 MS: 1 ChangeBit- 00:09:27.410 [2024-07-23 08:22:39.774223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 
nsid:0 00:09:27.410 [2024-07-23 08:22:39.774275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.410 [2024-07-23 08:22:39.863647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.410 [2024-07-23 08:22:39.863684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.410 #40 NEW cov: 12198 ft: 14006 corp: 10/324b lim: 100 exec/s: 40 rss: 230Mb L: 30/65 MS: 1 PersAutoDict- DE: "nvmf"- 00:09:27.410 [2024-07-23 08:22:39.926127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.410 [2024-07-23 08:22:39.926180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.410 [2024-07-23 08:22:39.926238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.410 [2024-07-23 08:22:39.926268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.410 [2024-07-23 08:22:39.926333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.410 [2024-07-23 08:22:39.926361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.668 [2024-07-23 08:22:39.985376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.668 [2024-07-23 08:22:39.985411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.668 [2024-07-23 08:22:39.985447] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:27.668 [2024-07-23 08:22:39.985467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:27.668 [2024-07-23 08:22:39.985500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:27.668 [2024-07-23 08:22:39.985517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:27.668 #42 NEW cov: 12198 ft: 14065 corp: 11/389b lim: 100 exec/s: 42 rss: 232Mb L: 65/65 MS: 1 ChangeByte- 00:09:27.668 [2024-07-23 08:22:40.053952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.668 [2024-07-23 08:22:40.054008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.668 [2024-07-23 08:22:40.143150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.668 [2024-07-23 08:22:40.143193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.668 #44 NEW cov: 12198 ft: 14149 corp: 12/424b lim: 100 exec/s: 44 rss: 233Mb L: 35/65 MS: 1 InsertByte- 00:09:27.927 [2024-07-23 08:22:40.215270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.927 [2024-07-23 08:22:40.215310] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.927 [2024-07-23 08:22:40.255039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.927 [2024-07-23 08:22:40.255070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.927 #46 NEW cov: 12198 ft: 14301 corp: 13/450b lim: 100 exec/s: 46 rss: 234Mb L: 26/65 MS: 1 ChangeByte- 00:09:27.927 [2024-07-23 08:22:40.326801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.927 [2024-07-23 08:22:40.326833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.927 [2024-07-23 08:22:40.386927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:27.927 [2024-07-23 08:22:40.386959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:27.927 #48 NEW cov: 12198 ft: 14341 corp: 14/486b lim: 100 exec/s: 48 rss: 235Mb L: 36/65 MS: 1 InsertByte- 00:09:28.186 [2024-07-23 08:22:40.460190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.186 [2024-07-23 08:22:40.460223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.186 [2024-07-23 08:22:40.520142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:28.186 [2024-07-23 08:22:40.520174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:28.186 #50 NEW cov: 12198 ft: 14516 corp: 15/516b lim: 100 exec/s: 25 rss: 237Mb L: 30/65 MS: 1 ChangeBinInt- 00:09:28.186 #50 DONE cov: 12198 ft: 14516 corp: 15/516b lim: 100 exec/s: 25 rss: 237Mb 00:09:28.186 ###### Recommended dictionary. ###### 00:09:28.186 "nvmf" # Uses: 2 00:09:28.186 ###### End of recommended dictionary. 
###### 00:09:28.186 Done 50 runs in 2 second(s) 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:28.754 08:22:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:09:28.754 [2024-07-23 08:22:41.025268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:28.754 [2024-07-23 08:22:41.025373] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631894 ] 00:09:28.754 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.754 [2024-07-23 08:22:41.250880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.012 [2024-07-23 08:22:41.401217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.271 [2024-07-23 08:22:41.638696] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.271 [2024-07-23 08:22:41.654961] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:29.271 INFO: Running with entropic power schedule (0xFF, 100). 00:09:29.271 INFO: Seed: 4202332740 00:09:29.271 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:29.271 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:29.271 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:29.271 INFO: A corpus is not provided, starting from an empty corpus 00:09:29.271 #2 INITED exec/s: 0 rss: 201Mb 00:09:29.271 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:29.271 This may also happen if the target rejected all inputs we tried so far 00:09:29.271 [2024-07-23 08:22:41.713819] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792915009579191051 len:25868 00:09:29.271 [2024-07-23 08:22:41.713854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.528 NEW_FUNC[1/698]: 0x56fb40 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:09:29.528 NEW_FUNC[2/698]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:29.528 [2024-07-23 08:22:41.894498] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792915009579191051 len:25868 00:09:29.528 [2024-07-23 08:22:41.894570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.528 #10 NEW cov: 12037 ft: 11966 corp: 2/12b lim: 50 exec/s: 0 rss: 218Mb L: 11/11 MS: 2 InsertRepeatedBytes-CMP- DE: "\001\000\000e"- 00:09:29.528 [2024-07-23 08:22:41.968192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:29.528 [2024-07-23 08:22:41.968240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.528 [2024-07-23 08:22:42.027950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:29.528 [2024-07-23 08:22:42.027983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.786 #17 NEW cov: 12061 ft: 12356 corp: 3/24b lim: 50 exec/s: 0 rss: 220Mb L: 12/12 MS: 1 InsertByte- 00:09:29.786 [2024-07-23 08:22:42.082173] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.082211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.786 [2024-07-23 08:22:42.082271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.082290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.786 [2024-07-23 08:22:42.121600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.121632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.786 [2024-07-23 08:22:42.121695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.121713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:29.786 #22 NEW cov: 12073 ft: 13008 corp: 4/51b lim: 50 exec/s: 0 rss: 221Mb L: 27/27 MS: 4 InsertByte-ChangeBinInt-CopyPart-InsertRepeatedBytes- 00:09:29.786 [2024-07-23 08:22:42.187257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.187292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.786 [2024-07-23 08:22:42.247406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21332 00:09:29.786 [2024-07-23 08:22:42.247438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:29.786 #24 NEW cov: 12159 ft: 13530 corp: 5/65b lim: 50 exec/s: 0 rss: 223Mb L: 14/27 MS: 1 EraseBytes- 00:09:30.043 [2024-07-23 08:22:42.307560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004333709628756819 len:21332 00:09:30.043 [2024-07-23 08:22:42.307597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.043 [2024-07-23 08:22:42.367307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004333709628756819 len:21332 00:09:30.044 [2024-07-23 08:22:42.367343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.044 #26 NEW cov: 12159 ft: 13735 corp: 6/79b lim: 50 exec/s: 0 rss: 225Mb L: 14/27 MS: 1 ChangeBinInt- 00:09:30.044 [2024-07-23 08:22:42.438463] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560354899 len:21332 00:09:30.044 [2024-07-23 08:22:42.438495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.044 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:30.044 [2024-07-23 08:22:42.478579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560354899 len:21332 00:09:30.044 [2024-07-23 08:22:42.478610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.044 #28 NEW cov: 12176 ft: 13915 corp: 7/93b lim: 50 exec/s: 0 rss: 226Mb L: 14/27 MS: 1 ChangeByte- 00:09:30.044 [2024-07-23 08:22:42.539234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792915009579191051 len:25903 00:09:30.044 [2024-07-23 08:22:42.539273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.302 [2024-07-23 08:22:42.579016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792915009579191051 len:25903 00:09:30.302 [2024-07-23 08:22:42.579047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.302 #30 NEW cov: 12176 ft: 13963 corp: 8/105b lim: 50 exec/s: 0 rss: 227Mb L: 12/27 MS: 1 InsertByte- 00:09:30.302 [2024-07-23 08:22:42.650068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004194071652029267 len:21332 00:09:30.302 [2024-07-23 08:22:42.650109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.302 [2024-07-23 08:22:42.700049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004194071652029267 len:21332 00:09:30.302 [2024-07-23 08:22:42.700078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.302 #32 NEW cov: 12176 ft: 14006 corp: 9/119b lim: 50 exec/s: 32 rss: 229Mb L: 14/27 MS: 1 ChangeByte- 00:09:30.302 [2024-07-23 08:22:42.771806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234346214675248 len:21332 00:09:30.302 [2024-07-23 08:22:42.771841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.559 [2024-07-23 08:22:42.831829] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234346214675248 len:21332 00:09:30.559 [2024-07-23 08:22:42.831860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.559 #34 NEW cov: 12176 ft: 14030 corp: 10/134b lim: 50 exec/s: 34 rss: 230Mb L: 15/27 MS: 1 InsertByte- 00:09:30.559 [2024-07-23 08:22:42.903554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:816087573616855819 len:44467 00:09:30.559 [2024-07-23 08:22:42.903587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.559 [2024-07-23 08:22:42.963559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:816087573616855819 len:44467 00:09:30.559 [2024-07-23 08:22:42.963590] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.559 #36 NEW cov: 12176 ft: 14141 corp: 11/146b lim: 50 exec/s: 36 rss: 232Mb L: 12/27 MS: 1 CrossOver- 00:09:30.559 [2024-07-23 08:22:43.025475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:23454041066656560 len:21332 00:09:30.559 [2024-07-23 08:22:43.025516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.817 [2024-07-23 08:22:43.085085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:23454041066656560 len:21332 00:09:30.817 [2024-07-23 08:22:43.085124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.817 #38 NEW cov: 12176 ft: 14208 corp: 12/161b lim: 50 exec/s: 38 rss: 233Mb L: 15/27 MS: 1 ChangeByte- 00:09:30.817 [2024-07-23 08:22:43.147133] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004194071652029267 len:257 00:09:30.817 [2024-07-23 08:22:43.147171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.817 [2024-07-23 08:22:43.206690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004194071652029267 len:257 00:09:30.817 [2024-07-23 08:22:43.206723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.817 #40 NEW cov: 12176 ft: 14272 corp: 13/175b lim: 50 exec/s: 40 rss: 234Mb L: 14/27 MS: 1 PersAutoDict- DE: "\001\000\000e"- 00:09:30.817 [2024-07-23 08:22:43.268235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:23454041066656560 len:21332 00:09:30.817 [2024-07-23 08:22:43.268272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.817 [2024-07-23 08:22:43.268331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10923366097404270487 len:38808 00:09:30.817 [2024-07-23 08:22:43.268352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.817 [2024-07-23 08:22:43.268420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10923366098549577623 len:38808 00:09:30.817 [2024-07-23 08:22:43.268439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:30.818 [2024-07-23 08:22:43.327935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:23454041066656560 len:21332 00:09:30.818 [2024-07-23 08:22:43.327967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:30.818 [2024-07-23 08:22:43.328004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10923366097404270487 len:38808 00:09:30.818 [2024-07-23 08:22:43.328020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:30.818 [2024-07-23 08:22:43.328071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10923366098549577623 len:38808 00:09:30.818 [2024-07-23 08:22:43.328087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:31.075 #42 NEW cov: 12176 ft: 14571 corp: 14/209b lim: 50 exec/s: 42 rss: 236Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:09:31.075 [2024-07-23 08:22:43.389735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:31.075 [2024-07-23 08:22:43.389774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.389837] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23454041066656560 len:21332 00:09:31.075 [2024-07-23 08:22:43.389857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.449249] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:31.075 [2024-07-23 08:22:43.449280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.449356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23454041066656560 len:21332 00:09:31.075 [2024-07-23 08:22:43.449373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.075 #44 NEW cov: 12176 ft: 14595 corp: 15/236b lim: 50 exec/s: 44 rss: 237Mb L: 27/34 MS: 1 CrossOver- 00:09:31.075 [2024-07-23 08:22:43.520909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:31.075 [2024-07-23 08:22:43.520945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.520989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23454041066656560 len:29524 00:09:31.075 [2024-07-23 08:22:43.521006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.580891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:795730858857925422 len:102 00:09:31.075 [2024-07-23 08:22:43.580921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.075 [2024-07-23 08:22:43.580964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:23454041066656560 len:29524 00:09:31.075 [2024-07-23 08:22:43.580980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.334 #46 NEW cov: 12176 ft: 14631 corp: 16/263b lim: 50 exec/s: 46 rss: 238Mb L: 27/34 MS: 1 ChangeBit- 
00:09:31.334 [2024-07-23 08:22:43.652616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21295 00:09:31.334 [2024-07-23 08:22:43.652648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.334 [2024-07-23 08:22:43.652692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6655285968285946707 len:21295 00:09:31.334 [2024-07-23 08:22:43.652709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.334 [2024-07-23 08:22:43.692539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:6004234345560363859 len:21295 00:09:31.334 [2024-07-23 08:22:43.692570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:31.334 [2024-07-23 08:22:43.692623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6655285968285946707 len:21295 00:09:31.334 [2024-07-23 08:22:43.692650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:31.334 #48 NEW cov: 12176 ft: 14645 corp: 17/290b lim: 50 exec/s: 24 rss: 240Mb L: 27/34 MS: 1 CopyPart- 00:09:31.334 #48 DONE cov: 12176 ft: 14645 corp: 17/290b lim: 50 exec/s: 24 rss: 240Mb 00:09:31.334 ###### Recommended dictionary. ###### 00:09:31.334 "\001\000\000e" # Uses: 1 00:09:31.334 ###### End of recommended dictionary. ###### 00:09:31.334 Done 48 runs in 2 second(s) 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed 
-e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:31.902 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:31.903 08:22:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:09:31.903 [2024-07-23 08:22:44.194246] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:31.903 [2024-07-23 08:22:44.194333] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632357 ] 00:09:31.903 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.903 [2024-07-23 08:22:44.415126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.161 [2024-07-23 08:22:44.567551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.420 [2024-07-23 08:22:44.810518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.420 [2024-07-23 08:22:44.826799] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:32.420 INFO: Running with entropic power schedule (0xFF, 100). 00:09:32.420 INFO: Seed: 3081436953 00:09:32.420 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:32.420 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:32.420 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:32.420 INFO: A corpus is not provided, starting from an empty corpus 00:09:32.420 #2 INITED exec/s: 0 rss: 202Mb 00:09:32.420 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:32.420 This may also happen if the target rejected all inputs we tried so far 00:09:32.420 [2024-07-23 08:22:44.882578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.420 [2024-07-23 08:22:44.882614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.420 [2024-07-23 08:22:44.882685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.420 [2024-07-23 08:22:44.882703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.679 NEW_FUNC[1/700]: 0x571ab0 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:32.679 NEW_FUNC[2/700]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:32.679 [2024-07-23 08:22:45.053174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.679 [2024-07-23 08:22:45.053236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.679 [2024-07-23 08:22:45.053316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.679 [2024-07-23 08:22:45.053342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.679 #31 NEW cov: 12095 ft: 12096 corp: 2/41b lim: 90 exec/s: 0 rss: 218Mb L: 40/40 MS: 3 InsertByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:09:32.679 [2024-07-23 08:22:45.100564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.679 [2024-07-23 08:22:45.100625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.679 [2024-07-23 08:22:45.100726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.679 [2024-07-23 08:22:45.100756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.679 [2024-07-23 08:22:45.160483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.679 [2024-07-23 08:22:45.160516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.679 [2024-07-23 08:22:45.160573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:32.679 [2024-07-23 08:22:45.160589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:32.679 #33 NEW cov: 12119 ft: 12469 corp: 3/81b lim: 90 exec/s: 0 rss: 220Mb L: 40/40 MS: 1 ChangeByte- 00:09:32.938 [2024-07-23 08:22:45.229340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.938 [2024-07-23 08:22:45.229374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:32.938 [2024-07-23 08:22:45.279274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.938 [2024-07-23 08:22:45.279306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.938 #37 NEW cov: 12131 ft: 13772 corp: 4/108b lim: 90 exec/s: 0 rss: 222Mb L: 27/40 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes- 00:09:32.938 [2024-07-23 08:22:45.348042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.938 [2024-07-23 08:22:45.348077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.938 [2024-07-23 08:22:45.408161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:32.938 [2024-07-23 08:22:45.408191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:32.938 #44 NEW cov: 12217 ft: 14127 corp: 5/135b lim: 90 exec/s: 0 rss: 224Mb L: 27/40 MS: 1 CMP- DE: "noti"- 00:09:33.197 [2024-07-23 08:22:45.469227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.197 [2024-07-23 08:22:45.469262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.197 [2024-07-23 08:22:45.469314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.197 [2024-07-23 08:22:45.469335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.197 [2024-07-23 08:22:45.508930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.197 [2024-07-23 08:22:45.508959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.197 [2024-07-23 08:22:45.508998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.197 [2024-07-23 08:22:45.509015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.197 #46 NEW cov: 12217 ft: 14249 corp: 6/175b lim: 90 exec/s: 0 rss: 225Mb L: 40/40 MS: 1 ChangeByte- 00:09:33.197 [2024-07-23 08:22:45.569673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.197 [2024-07-23 08:22:45.569713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.197 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:33.197 [2024-07-23 08:22:45.609443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.197 [2024-07-23 08:22:45.609473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.197 #51 NEW cov: 12234 ft: 14362 corp: 7/195b lim: 90 exec/s: 0 rss: 226Mb L: 20/40 MS: 4 
EraseBytes-InsertByte-ShuffleBytes-PersAutoDict- DE: "noti"- 00:09:33.197 [2024-07-23 08:22:45.670446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.197 [2024-07-23 08:22:45.670482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.197 [2024-07-23 08:22:45.670541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.197 [2024-07-23 08:22:45.670561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.730174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.456 [2024-07-23 08:22:45.730202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.730245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.456 [2024-07-23 08:22:45.730262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.456 #53 NEW cov: 12234 ft: 14386 corp: 8/235b lim: 90 exec/s: 0 rss: 228Mb L: 40/40 MS: 1 ChangeBit- 00:09:33.456 [2024-07-23 08:22:45.801791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.456 [2024-07-23 08:22:45.801822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.851775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.456 [2024-07-23 08:22:45.851806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.456 #55 NEW cov: 12234 ft: 14474 corp: 9/261b lim: 90 exec/s: 55 rss: 229Mb L: 26/40 MS: 1 CrossOver- 00:09:33.456 [2024-07-23 08:22:45.923610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.456 [2024-07-23 08:22:45.923641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.923686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.456 [2024-07-23 08:22:45.923706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.963570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.456 [2024-07-23 08:22:45.963601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.456 [2024-07-23 08:22:45.963642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.456 [2024-07-23 08:22:45.963658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.714 #57 NEW cov: 12234 ft: 14547 corp: 10/301b lim: 
90 exec/s: 57 rss: 230Mb L: 40/40 MS: 1 CrossOver- 00:09:33.714 [2024-07-23 08:22:46.024671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.714 [2024-07-23 08:22:46.024705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.714 [2024-07-23 08:22:46.024757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.714 [2024-07-23 08:22:46.024776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.714 [2024-07-23 08:22:46.084419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.714 [2024-07-23 08:22:46.084449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.714 [2024-07-23 08:22:46.084508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.714 [2024-07-23 08:22:46.084524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.714 #59 NEW cov: 12234 ft: 14639 corp: 11/341b lim: 90 exec/s: 59 rss: 232Mb L: 40/40 MS: 1 ChangeBit- 00:09:33.714 [2024-07-23 08:22:46.156082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.714 [2024-07-23 08:22:46.156124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.714 [2024-07-23 08:22:46.216111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.714 [2024-07-23 08:22:46.216143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.973 #61 NEW cov: 12234 ft: 14644 corp: 12/367b lim: 90 exec/s: 61 rss: 234Mb L: 26/40 MS: 1 ChangeByte- 00:09:33.973 [2024-07-23 08:22:46.287682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.973 [2024-07-23 08:22:46.287715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.973 [2024-07-23 08:22:46.327654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.973 [2024-07-23 08:22:46.327684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.973 #63 NEW cov: 12234 ft: 14673 corp: 13/394b lim: 90 exec/s: 63 rss: 235Mb L: 27/40 MS: 1 PersAutoDict- DE: "noti"- 00:09:33.973 [2024-07-23 08:22:46.389490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.973 [2024-07-23 08:22:46.389525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.973 [2024-07-23 08:22:46.389579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.973 [2024-07-23 08:22:46.389600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.973 [2024-07-23 08:22:46.449149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:33.973 [2024-07-23 08:22:46.449179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:33.973 [2024-07-23 08:22:46.449237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:33.973 [2024-07-23 08:22:46.449255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:33.973 #65 NEW cov: 12234 ft: 14705 corp: 14/434b lim: 90 exec/s: 65 rss: 237Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:34.232 [2024-07-23 08:22:46.520719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.232 [2024-07-23 08:22:46.520751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.232 [2024-07-23 08:22:46.520800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.232 [2024-07-23 08:22:46.520817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.232 [2024-07-23 08:22:46.580749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.232 [2024-07-23 08:22:46.580779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.232 [2024-07-23 08:22:46.580822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.232 [2024-07-23 08:22:46.580838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.232 #67 NEW cov: 12234 ft: 14756 corp: 15/474b lim: 90 exec/s: 67 rss: 238Mb L: 40/40 MS: 1 CMP- DE: "spdk_nvme_driver"- 00:09:34.232 [2024-07-23 08:22:46.641923] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.232 [2024-07-23 08:22:46.641955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.232 [2024-07-23 08:22:46.681606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.232 [2024-07-23 08:22:46.681641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.232 #69 NEW cov: 12234 ft: 14860 corp: 16/501b lim: 90 exec/s: 69 rss: 239Mb L: 27/40 MS: 1 ChangeByte- 00:09:34.232 [2024-07-23 08:22:46.732005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.232 [2024-07-23 08:22:46.732040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.491 [2024-07-23 08:22:46.771946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.491 [2024-07-23 08:22:46.771977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.491 #71 NEW cov: 12234 ft: 14885 corp: 17/528b lim: 90 exec/s: 71 rss: 240Mb L: 27/40 MS: 1 EraseBytes- 00:09:34.491 [2024-07-23 08:22:46.844088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.491 [2024-07-23 08:22:46.844125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.491 [2024-07-23 08:22:46.844183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.491 [2024-07-23 08:22:46.844202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.491 [2024-07-23 08:22:46.884004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:34.491 [2024-07-23 08:22:46.884034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.491 [2024-07-23 08:22:46.884077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:34.491 [2024-07-23 08:22:46.884095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.491 #73 NEW cov: 12234 ft: 14902 corp: 18/568b lim: 90 exec/s: 36 rss: 241Mb L: 40/40 MS: 1 ChangeBit- 00:09:34.491 #73 DONE cov: 12234 ft: 14902 corp: 18/568b lim: 90 exec/s: 36 rss: 241Mb 00:09:34.491 ###### Recommended dictionary. ###### 00:09:34.491 "noti" # Uses: 2 00:09:34.491 "spdk_nvme_driver" # Uses: 0 00:09:34.491 ###### End of recommended dictionary. 
###### 00:09:34.491 Done 73 runs in 2 second(s) 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:35.059 08:22:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:09:35.059 [2024-07-23 08:22:47.387017] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:35.059 [2024-07-23 08:22:47.387160] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632910 ] 00:09:35.059 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.318 [2024-07-23 08:22:47.612767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.318 [2024-07-23 08:22:47.760723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.576 [2024-07-23 08:22:47.997826] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.576 [2024-07-23 08:22:48.014079] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:35.576 INFO: Running with entropic power schedule (0xFF, 100). 00:09:35.576 INFO: Seed: 1971415622 00:09:35.576 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:35.576 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:35.576 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:35.576 INFO: A corpus is not provided, starting from an empty corpus 00:09:35.576 #2 INITED exec/s: 0 rss: 201Mb 00:09:35.576 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:35.576 This may also happen if the target rejected all inputs we tried so far 00:09:35.576 [2024-07-23 08:22:48.073203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.576 [2024-07-23 08:22:48.073243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.576 [2024-07-23 08:22:48.073309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.576 [2024-07-23 08:22:48.073325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.577 [2024-07-23 08:22:48.073373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.577 [2024-07-23 08:22:48.073389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.835 NEW_FUNC[1/700]: 0x5753a0 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:35.835 NEW_FUNC[2/700]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:35.835 [2024-07-23 08:22:48.253857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.835 [2024-07-23 08:22:48.253914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.835 [2024-07-23 08:22:48.253987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.835 [2024-07-23 08:22:48.254009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.835 [2024-07-23 08:22:48.254074] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.835 [2024-07-23 08:22:48.254094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.835 #7 NEW cov: 12070 ft: 12042 corp: 2/36b lim: 50 exec/s: 0 rss: 219Mb L: 35/35 MS: 4 ChangeBit-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:09:35.835 [2024-07-23 08:22:48.330167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:35.835 [2024-07-23 08:22:48.330213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.835 [2024-07-23 08:22:48.330271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:35.835 [2024-07-23 08:22:48.330294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.835 [2024-07-23 08:22:48.330359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:35.835 [2024-07-23 08:22:48.330380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.389834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.094 [2024-07-23 08:22:48.389866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.389912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.094 [2024-07-23 08:22:48.389931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.389987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.094 [2024-07-23 08:22:48.390002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.094 #9 NEW cov: 12094 ft: 12359 corp: 3/71b lim: 50 exec/s: 0 rss: 221Mb L: 35/35 MS: 1 CMP- DE: "\002\000"- 00:09:36.094 [2024-07-23 08:22:48.448916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.094 [2024-07-23 08:22:48.448951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.449011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.094 [2024-07-23 08:22:48.449028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.449084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.094 [2024-07-23 08:22:48.449105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.488457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:0 nsid:0 00:09:36.094 [2024-07-23 08:22:48.488488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.488535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.094 [2024-07-23 08:22:48.488551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.488605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.094 [2024-07-23 08:22:48.488620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.094 #11 NEW cov: 12106 ft: 13037 corp: 4/106b lim: 50 exec/s: 0 rss: 222Mb L: 35/35 MS: 1 ChangeBit- 00:09:36.094 [2024-07-23 08:22:48.548858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.094 [2024-07-23 08:22:48.548893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.094 [2024-07-23 08:22:48.598693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.094 [2024-07-23 08:22:48.598724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.352 #15 NEW cov: 12192 ft: 14118 corp: 5/121b lim: 50 exec/s: 0 rss: 223Mb L: 15/35 MS: 3 ChangeByte-ShuffleBytes-CrossOver- 00:09:36.352 [2024-07-23 08:22:48.670299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.352 [2024-07-23 08:22:48.670332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.352 [2024-07-23 08:22:48.670400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.352 [2024-07-23 08:22:48.670418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.352 [2024-07-23 08:22:48.670473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.353 [2024-07-23 08:22:48.670488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.710226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.353 [2024-07-23 08:22:48.710256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.710300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.353 [2024-07-23 08:22:48.710316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.710369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.353 [2024-07-23 08:22:48.710385] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.353 #17 NEW cov: 12192 ft: 14259 corp: 6/156b lim: 50 exec/s: 0 rss: 225Mb L: 35/35 MS: 1 CrossOver- 00:09:36.353 [2024-07-23 08:22:48.771087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.353 [2024-07-23 08:22:48.771128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.771180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.353 [2024-07-23 08:22:48.771200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.771258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.353 [2024-07-23 08:22:48.771275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.353 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:36.353 [2024-07-23 08:22:48.830946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.353 [2024-07-23 08:22:48.830977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.831020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.353 [2024-07-23 08:22:48.831036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.353 [2024-07-23 08:22:48.831088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.353 [2024-07-23 08:22:48.831110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.353 #19 NEW cov: 12209 ft: 14374 corp: 7/187b lim: 50 exec/s: 0 rss: 226Mb L: 31/35 MS: 1 CrossOver- 00:09:36.611 [2024-07-23 08:22:48.892539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.611 [2024-07-23 08:22:48.892582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.892636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.611 [2024-07-23 08:22:48.892660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.892727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.611 [2024-07-23 08:22:48.892748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.932181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.611 [2024-07-23 08:22:48.932212] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.932270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.611 [2024-07-23 08:22:48.932286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.932340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.611 [2024-07-23 08:22:48.932356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.611 #21 NEW cov: 12209 ft: 14472 corp: 8/222b lim: 50 exec/s: 0 rss: 228Mb L: 35/35 MS: 1 ChangeByte- 00:09:36.611 [2024-07-23 08:22:48.993285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.611 [2024-07-23 08:22:48.993320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.993366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.611 [2024-07-23 08:22:48.993384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:48.993444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.611 [2024-07-23 08:22:48.993461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:49.053311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.611 [2024-07-23 08:22:49.053342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:49.053411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.611 [2024-07-23 08:22:49.053428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:49.053484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.611 [2024-07-23 08:22:49.053500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.611 #23 NEW cov: 12209 ft: 14528 corp: 9/255b lim: 50 exec/s: 23 rss: 229Mb L: 33/35 MS: 1 CopyPart- 00:09:36.611 [2024-07-23 08:22:49.114728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.611 [2024-07-23 08:22:49.114762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:49.114827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.611 [2024-07-23 08:22:49.114844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 
p:0 m:0 dnr:1 00:09:36.611 [2024-07-23 08:22:49.114902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.611 [2024-07-23 08:22:49.114919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.869 [2024-07-23 08:22:49.154383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.869 [2024-07-23 08:22:49.154412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.869 [2024-07-23 08:22:49.154473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.869 [2024-07-23 08:22:49.154489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.869 [2024-07-23 08:22:49.154556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.869 [2024-07-23 08:22:49.154577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.869 #25 NEW cov: 12209 ft: 14668 corp: 10/286b lim: 50 exec/s: 25 rss: 230Mb L: 31/35 MS: 1 ChangeBit- 00:09:36.869 [2024-07-23 08:22:49.215978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.869 [2024-07-23 08:22:49.216013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.869 [2024-07-23 08:22:49.216073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.869 [2024-07-23 08:22:49.216088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.216154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.870 [2024-07-23 08:22:49.216172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.216234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:36.870 [2024-07-23 08:22:49.216251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.255595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.870 [2024-07-23 08:22:49.255626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.255678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.870 [2024-07-23 08:22:49.255695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.255747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.870 [2024-07-23 08:22:49.255763] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.255817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:09:36.870 [2024-07-23 08:22:49.255833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.870 #27 NEW cov: 12209 ft: 15027 corp: 11/328b lim: 50 exec/s: 27 rss: 232Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:09:36.870 [2024-07-23 08:22:49.316707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.870 [2024-07-23 08:22:49.316741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.316788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.870 [2024-07-23 08:22:49.316808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.316866] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.870 [2024-07-23 08:22:49.316883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.376745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:36.870 [2024-07-23 08:22:49.376776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.376839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:36.870 [2024-07-23 08:22:49.376859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.870 [2024-07-23 08:22:49.376915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:36.870 [2024-07-23 08:22:49.376929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.128 #29 NEW cov: 12209 ft: 15069 corp: 12/360b lim: 50 exec/s: 29 rss: 233Mb L: 32/42 MS: 1 EraseBytes- 00:09:37.128 [2024-07-23 08:22:49.447862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.128 [2024-07-23 08:22:49.447895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.447938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.128 [2024-07-23 08:22:49.447955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.448014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.128 [2024-07-23 08:22:49.448030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.507962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.128 [2024-07-23 08:22:49.507991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.508041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.128 [2024-07-23 08:22:49.508058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.508125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.128 [2024-07-23 08:22:49.508141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.128 #31 NEW cov: 12209 ft: 15139 corp: 13/395b lim: 50 exec/s: 31 rss: 234Mb L: 35/42 MS: 1 ChangeBinInt- 00:09:37.128 [2024-07-23 08:22:49.569651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.128 [2024-07-23 08:22:49.569689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.569753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.128 [2024-07-23 08:22:49.569772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.569833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.128 [2024-07-23 08:22:49.569851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.609273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.128 [2024-07-23 08:22:49.609303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.609346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.128 [2024-07-23 08:22:49.609362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.128 [2024-07-23 08:22:49.609416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.128 [2024-07-23 08:22:49.609435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.386 #33 NEW cov: 12209 ft: 15231 corp: 14/430b lim: 50 exec/s: 33 rss: 235Mb L: 35/42 MS: 1 CrossOver- 00:09:37.386 [2024-07-23 08:22:49.670986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.386 [2024-07-23 08:22:49.671023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.386 [2024-07-23 08:22:49.671081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.387 [2024-07-23 08:22:49.671106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.671168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.387 [2024-07-23 08:22:49.671186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.730587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.387 [2024-07-23 08:22:49.730618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.730667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.387 [2024-07-23 08:22:49.730684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.730733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.387 [2024-07-23 08:22:49.730765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.387 #35 NEW cov: 12209 ft: 15255 corp: 15/465b lim: 50 exec/s: 35 rss: 237Mb L: 35/42 MS: 1 PersAutoDict- DE: "\002\000"- 00:09:37.387 [2024-07-23 08:22:49.791881] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.387 [2024-07-23 08:22:49.791924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.791995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.387 [2024-07-23 08:22:49.792018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.792089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.387 [2024-07-23 08:22:49.792119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.831684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.387 [2024-07-23 08:22:49.831713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.831761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:37.387 [2024-07-23 08:22:49.831777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.387 [2024-07-23 08:22:49.831831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:37.387 [2024-07-23 08:22:49.831847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:09:37.387 #37 NEW cov: 12209 ft: 15340 corp: 16/496b lim: 50 exec/s: 37 rss: 238Mb L: 31/42 MS: 1 ChangeBit- 00:09:37.387 [2024-07-23 08:22:49.892685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.387 [2024-07-23 08:22:49.892732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.645 [2024-07-23 08:22:49.942340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.645 [2024-07-23 08:22:49.942373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.645 #39 NEW cov: 12209 ft: 15370 corp: 17/515b lim: 50 exec/s: 39 rss: 239Mb L: 19/42 MS: 1 EraseBytes- 00:09:37.645 [2024-07-23 08:22:50.004035] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.645 [2024-07-23 08:22:50.004073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.645 [2024-07-23 08:22:50.063794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:37.645 [2024-07-23 08:22:50.063837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.645 #41 NEW cov: 12209 ft: 15391 corp: 18/534b lim: 50 exec/s: 20 rss: 241Mb L: 19/42 MS: 1 ChangeBit- 00:09:37.645 #41 DONE cov: 12209 ft: 15391 corp: 18/534b lim: 50 exec/s: 20 rss: 241Mb 00:09:37.645 ###### Recommended dictionary. ###### 00:09:37.645 "\002\000" # Uses: 1 00:09:37.645 ###### End of recommended dictionary. 
###### 00:09:37.645 Done 41 runs in 2 second(s) 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:38.214 08:22:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:09:38.214 [2024-07-23 08:22:50.567539] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:38.214 [2024-07-23 08:22:50.567628] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633465 ] 00:09:38.214 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.474 [2024-07-23 08:22:50.793355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.474 [2024-07-23 08:22:50.941584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.733 [2024-07-23 08:22:51.178577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.733 [2024-07-23 08:22:51.194833] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:38.733 INFO: Running with entropic power schedule (0xFF, 100). 00:09:38.733 INFO: Seed: 859483064 00:09:38.733 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:38.733 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:38.733 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:38.733 INFO: A corpus is not provided, starting from an empty corpus 00:09:38.733 #2 INITED exec/s: 0 rss: 202Mb 00:09:38.733 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:38.733 This may also happen if the target rejected all inputs we tried so far 00:09:38.733 [2024-07-23 08:22:51.250801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.733 [2024-07-23 08:22:51.250845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.733 [2024-07-23 08:22:51.250916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.733 [2024-07-23 08:22:51.250940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.994 NEW_FUNC[1/700]: 0x577af0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:38.994 NEW_FUNC[2/700]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:38.994 [2024-07-23 08:22:51.421525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:38.994 [2024-07-23 08:22:51.421593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.994 [2024-07-23 08:22:51.421673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:38.994 [2024-07-23 08:22:51.421701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.288 #4 NEW cov: 12066 ft: 12028 corp: 2/50b lim: 85 exec/s: 0 rss: 218Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:09:39.288 [2024-07-23 08:22:51.557476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.288 [2024-07-23 08:22:51.557524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.557596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.288 [2024-07-23 08:22:51.557615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.617544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.288 [2024-07-23 08:22:51.617577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.617631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.288 [2024-07-23 08:22:51.617647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.288 #6 NEW cov: 12120 ft: 12398 corp: 3/100b lim: 85 exec/s: 0 rss: 220Mb L: 50/50 MS: 1 InsertByte- 00:09:39.288 [2024-07-23 08:22:51.689773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.288 [2024-07-23 08:22:51.689807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.689856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.288 [2024-07-23 08:22:51.689873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.729747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.288 [2024-07-23 08:22:51.729778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.288 [2024-07-23 08:22:51.729830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.288 [2024-07-23 08:22:51.729846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.288 #8 NEW cov: 12132 ft: 12995 corp: 4/149b lim: 85 exec/s: 0 rss: 221Mb L: 49/50 MS: 1 ChangeBit- 00:09:39.576 [2024-07-23 08:22:51.791186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:51.791223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.791272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 [2024-07-23 08:22:51.791291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.850730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:51.850764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.850808] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 [2024-07-23 08:22:51.850824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.576 #10 NEW cov: 12218 ft: 13397 corp: 5/186b lim: 85 exec/s: 0 rss: 224Mb L: 37/50 MS: 1 EraseBytes- 00:09:39.576 [2024-07-23 08:22:51.913290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:51.913326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.913385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 [2024-07-23 08:22:51.913400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.913460] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.576 [2024-07-23 08:22:51.913478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.913539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.576 [2024-07-23 08:22:51.913557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.952813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:51.952846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.952895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 [2024-07-23 08:22:51.952915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.952967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:39.576 [2024-07-23 08:22:51.952982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:51.953033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:39.576 [2024-07-23 08:22:51.953048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.576 #12 NEW cov: 12218 ft: 13953 corp: 6/255b lim: 85 exec/s: 0 rss: 225Mb L: 69/69 MS: 1 InsertRepeatedBytes- 00:09:39.576 [2024-07-23 08:22:52.013594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:52.013628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:52.013688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 
[2024-07-23 08:22:52.013710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:52.073307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.576 [2024-07-23 08:22:52.073337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.576 [2024-07-23 08:22:52.073382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.576 [2024-07-23 08:22:52.073399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.841 #14 NEW cov: 12218 ft: 14009 corp: 7/305b lim: 85 exec/s: 0 rss: 226Mb L: 50/69 MS: 1 ChangeBit- 00:09:39.842 [2024-07-23 08:22:52.129945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.842 [2024-07-23 08:22:52.129998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.130080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.842 [2024-07-23 08:22:52.130113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.169331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.842 [2024-07-23 08:22:52.169361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.169418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.842 [2024-07-23 08:22:52.169436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.842 #16 NEW cov: 12218 ft: 14028 corp: 8/354b lim: 85 exec/s: 0 rss: 228Mb L: 49/69 MS: 1 CMP- DE: "\001\000\177\356\327\014\2008"- 00:09:39.842 [2024-07-23 08:22:52.225846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.842 [2024-07-23 08:22:52.225890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.225960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.842 [2024-07-23 08:22:52.225983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.265449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.842 [2024-07-23 08:22:52.265479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.842 [2024-07-23 08:22:52.265531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:39.842 [2024-07-23 08:22:52.265548] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.842 #18 NEW cov: 12218 ft: 14067 corp: 9/403b lim: 85 exec/s: 18 rss: 229Mb L: 49/69 MS: 1 ChangeBinInt- 00:09:39.842 [2024-07-23 08:22:52.320441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:39.842 [2024-07-23 08:22:52.320490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.125 [2024-07-23 08:22:52.380066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.125 [2024-07-23 08:22:52.380104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.125 #20 NEW cov: 12218 ft: 14909 corp: 10/424b lim: 85 exec/s: 20 rss: 230Mb L: 21/69 MS: 1 EraseBytes- 00:09:40.125 [2024-07-23 08:22:52.449675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.125 [2024-07-23 08:22:52.449706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.125 [2024-07-23 08:22:52.449746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.125 [2024-07-23 08:22:52.449762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.125 [2024-07-23 08:22:52.489613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.125 [2024-07-23 08:22:52.489644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.125 [2024-07-23 08:22:52.489698] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.125 [2024-07-23 08:22:52.489715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.125 #22 NEW cov: 12218 ft: 14977 corp: 11/473b lim: 85 exec/s: 22 rss: 232Mb L: 49/69 MS: 1 CrossOver- 00:09:40.125 [2024-07-23 08:22:52.550838] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.126 [2024-07-23 08:22:52.550878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.126 [2024-07-23 08:22:52.550929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.126 [2024-07-23 08:22:52.550950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.126 [2024-07-23 08:22:52.551013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:40.126 [2024-07-23 08:22:52.551033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.126 [2024-07-23 08:22:52.610632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.126 [2024-07-23 08:22:52.610663] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.126 [2024-07-23 08:22:52.610714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.126 [2024-07-23 08:22:52.610732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.126 [2024-07-23 08:22:52.610786] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:40.126 [2024-07-23 08:22:52.610801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.419 #24 NEW cov: 12218 ft: 15264 corp: 12/527b lim: 85 exec/s: 24 rss: 233Mb L: 54/69 MS: 1 InsertRepeatedBytes- 00:09:40.419 [2024-07-23 08:22:52.661336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.419 [2024-07-23 08:22:52.661375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.419 [2024-07-23 08:22:52.661426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.419 [2024-07-23 08:22:52.661446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.419 [2024-07-23 08:22:52.701349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.419 [2024-07-23 08:22:52.701381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.419 [2024-07-23 08:22:52.701427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.419 [2024-07-23 08:22:52.701446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.419 #26 NEW cov: 12218 ft: 15296 corp: 13/577b lim: 85 exec/s: 26 rss: 234Mb L: 50/69 MS: 1 ShuffleBytes- 00:09:40.419 [2024-07-23 08:22:52.762928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.419 [2024-07-23 08:22:52.762966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.419 [2024-07-23 08:22:52.763013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.419 [2024-07-23 08:22:52.763034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.420 [2024-07-23 08:22:52.802508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.420 [2024-07-23 08:22:52.802540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.420 [2024-07-23 08:22:52.802584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.420 [2024-07-23 08:22:52.802601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:09:40.420 #28 NEW cov: 12218 ft: 15322 corp: 14/627b lim: 85 exec/s: 28 rss: 236Mb L: 50/69 MS: 1 CrossOver- 00:09:40.420 [2024-07-23 08:22:52.864227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.420 [2024-07-23 08:22:52.864268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.420 [2024-07-23 08:22:52.864337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.420 [2024-07-23 08:22:52.864356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:52.923791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.679 [2024-07-23 08:22:52.923823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:52.923867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.679 [2024-07-23 08:22:52.923887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 #30 NEW cov: 12218 ft: 15369 corp: 15/676b lim: 85 exec/s: 30 rss: 237Mb L: 49/69 MS: 1 ShuffleBytes- 00:09:40.679 [2024-07-23 08:22:52.995665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.679 [2024-07-23 08:22:52.995698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:52.995736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.679 [2024-07-23 08:22:52.995753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:40.679 [2024-07-23 08:22:53.035651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.679 [2024-07-23 08:22:53.035682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:53.035727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.679 [2024-07-23 08:22:53.035744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 #32 NEW cov: 12235 ft: 15400 corp: 16/726b lim: 85 exec/s: 32 rss: 239Mb L: 50/69 MS: 1 PersAutoDict- DE: "\001\000\177\356\327\014\2008"- 00:09:40.679 [2024-07-23 08:22:53.107444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.679 [2024-07-23 08:22:53.107476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:53.107523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:09:40.679 [2024-07-23 08:22:53.107540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:53.147425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.679 [2024-07-23 08:22:53.147455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.679 [2024-07-23 08:22:53.147508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.679 [2024-07-23 08:22:53.147525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.679 #34 NEW cov: 12235 ft: 15435 corp: 17/775b lim: 85 exec/s: 34 rss: 239Mb L: 49/69 MS: 1 CopyPart- 00:09:40.938 [2024-07-23 08:22:53.207162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.938 [2024-07-23 08:22:53.207197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.938 [2024-07-23 08:22:53.207243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.938 [2024-07-23 08:22:53.207260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.938 [2024-07-23 08:22:53.246909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:40.938 [2024-07-23 08:22:53.246941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.938 [2024-07-23 08:22:53.246984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:40.938 [2024-07-23 08:22:53.247000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.938 #41 NEW cov: 12235 ft: 15437 corp: 18/825b lim: 85 exec/s: 20 rss: 241Mb L: 50/69 MS: 1 ChangeBinInt- 00:09:40.938 #41 DONE cov: 12235 ft: 15437 corp: 18/825b lim: 85 exec/s: 20 rss: 241Mb 00:09:40.938 ###### Recommended dictionary. ###### 00:09:40.938 "\001\000\177\356\327\014\2008" # Uses: 2 00:09:40.938 ###### End of recommended dictionary. 
###### 00:09:40.938 Done 41 runs in 2 second(s) 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:41.201 08:22:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:09:41.460 [2024-07-23 08:22:53.747883] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
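The NEW_FUNC lines just below name the entry points for this run: TestOneInput (llvm_nvme_fuzz.c:780) and fuzz_nvm_reservation_report_command (llvm_nvme_fuzz.c:671). These are libFuzzer-style fuzz targets. The following is a minimal sketch of what such an entry point looks like; it is not SPDK's actual harness (nvme_cmd_stub and send_and_print() are illustrative stand-ins), but it shows the shape: libFuzzer calls the function once per generated input, and the harness maps the raw bytes onto one NVMe command with the opcode pinned to the selected fuzzer type, here RESERVATION REPORT (opcode 0x0e) for run 23.

/* Minimal sketch of a libFuzzer entry point in the shape of TestOneInput /
 * fuzz_nvm_reservation_report_command named below. NOT SPDK's actual
 * harness: nvme_cmd_stub and send_and_print() are illustrative stand-ins.
 * libFuzzer invokes this function once per generated input. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct nvme_cmd_stub {
    uint8_t  opc;    /* opcode; 0x0e = RESERVATION REPORT, per the traces */
    uint32_t nsid;   /* namespace ID */
    uint32_t cdw10;  /* command-specific dword */
};

static void send_and_print(const struct nvme_cmd_stub *cmd)
{
    /* Hypothetical helper: in the real tool the command is submitted to
     * the TCP target (trsvcid 4423 above) and the command/completion pair
     * is printed, which is what produces the *NOTICE* lines in this log. */
    (void)cmd;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    struct nvme_cmd_stub cmd = {0};

    if (size < sizeof(cmd)) {
        return 0;                /* too short to form a command */
    }
    memcpy(&cmd, data, sizeof(cmd));
    cmd.opc = 0x0e;              /* pin the opcode; the payload stays fuzzed */
    send_and_print(&cmd);
    return 0;
}

Built with clang -fsanitize=fuzzer, a target of this shape is also what yields the instrumentation reported in the INFO lines below: the "inline 8-bit counters" and PC tables come from SanitizerCoverage, which libFuzzer uses to detect the NEW coverage events.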
00:09:41.460 [2024-07-23 08:22:53.747988] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633934 ] 00:09:41.460 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.460 [2024-07-23 08:22:53.966929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.719 [2024-07-23 08:22:54.119211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.978 [2024-07-23 08:22:54.361503] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.978 [2024-07-23 08:22:54.377776] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:41.978 INFO: Running with entropic power schedule (0xFF, 100). 00:09:41.978 INFO: Seed: 4042455787 00:09:41.978 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:41.978 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:41.978 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:41.978 INFO: A corpus is not provided, starting from an empty corpus 00:09:41.978 #2 INITED exec/s: 0 rss: 202Mb 00:09:41.978 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:41.978 This may also happen if the target rejected all inputs we tried so far 00:09:41.978 [2024-07-23 08:22:54.433363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:41.978 [2024-07-23 08:22:54.433410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.237 NEW_FUNC[1/699]: 0x57b3f0 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:42.237 NEW_FUNC[2/699]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:42.237 [2024-07-23 08:22:54.614062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.237 [2024-07-23 08:22:54.614141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.237 #4 NEW cov: 12029 ft: 12030 corp: 2/8b lim: 25 exec/s: 0 rss: 218Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:09:42.237 [2024-07-23 08:22:54.690720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.237 [2024-07-23 08:22:54.690767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.237 [2024-07-23 08:22:54.690829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.237 [2024-07-23 08:22:54.690851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.237 [2024-07-23 08:22:54.690920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.237 [2024-07-23 08:22:54.690941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:09:42.237 [2024-07-23 08:22:54.730354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.237 [2024-07-23 08:22:54.730385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.237 [2024-07-23 08:22:54.730428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.237 [2024-07-23 08:22:54.730446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.237 [2024-07-23 08:22:54.730504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.237 [2024-07-23 08:22:54.730521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.496 #8 NEW cov: 12053 ft: 12868 corp: 3/27b lim: 25 exec/s: 0 rss: 219Mb L: 19/19 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:09:42.496 [2024-07-23 08:22:54.790732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.496 [2024-07-23 08:22:54.790770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.790825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.496 [2024-07-23 08:22:54.790849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.850499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.496 [2024-07-23 08:22:54.850530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.850573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.496 [2024-07-23 08:22:54.850589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.496 #10 NEW cov: 12065 ft: 13346 corp: 4/39b lim: 25 exec/s: 0 rss: 222Mb L: 12/19 MS: 1 CopyPart- 00:09:42.496 [2024-07-23 08:22:54.914862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.496 [2024-07-23 08:22:54.914898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.914952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.496 [2024-07-23 08:22:54.914972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.915034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.496 [2024-07-23 08:22:54.915052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.954796] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.496 [2024-07-23 08:22:54.954826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.954876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.496 [2024-07-23 08:22:54.954893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.496 [2024-07-23 08:22:54.954963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.496 [2024-07-23 08:22:54.954980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.496 #13 NEW cov: 12151 ft: 13798 corp: 5/54b lim: 25 exec/s: 0 rss: 224Mb L: 15/19 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:42.754 [2024-07-23 08:22:55.023619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.755 [2024-07-23 08:22:55.023657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.023705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.755 [2024-07-23 08:22:55.023723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.023787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.755 [2024-07-23 08:22:55.023805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.023868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:42.755 [2024-07-23 08:22:55.023885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.083610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.755 [2024-07-23 08:22:55.083642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.083693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.755 [2024-07-23 08:22:55.083711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.083769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.755 [2024-07-23 08:22:55.083786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.083844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:42.755 [2024-07-23 08:22:55.083859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.755 #15 NEW cov: 12151 ft: 14364 corp: 6/76b lim: 25 exec/s: 0 rss: 225Mb L: 22/22 MS: 1 InsertRepeatedBytes- 00:09:42.755 [2024-07-23 08:22:55.139627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.755 [2024-07-23 08:22:55.139681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.139742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.755 [2024-07-23 08:22:55.139769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.139850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.755 [2024-07-23 08:22:55.139876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.755 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:42.755 [2024-07-23 08:22:55.199042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.755 [2024-07-23 08:22:55.199072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.199118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.755 [2024-07-23 08:22:55.199135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.199192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:42.755 [2024-07-23 08:22:55.199207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.755 #17 NEW cov: 12168 ft: 14460 corp: 7/93b lim: 25 exec/s: 0 rss: 227Mb L: 17/22 MS: 1 EraseBytes- 00:09:42.755 [2024-07-23 08:22:55.252947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:42.755 [2024-07-23 08:22:55.252979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.755 [2024-07-23 08:22:55.253026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:42.755 [2024-07-23 08:22:55.253044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.312710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.013 [2024-07-23 08:22:55.312742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.312788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.013 [2024-07-23 08:22:55.312805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.013 #19 NEW cov: 12168 ft: 14534 corp: 8/105b lim: 25 exec/s: 0 rss: 228Mb L: 12/22 MS: 1 ShuffleBytes- 00:09:43.013 [2024-07-23 08:22:55.359985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.013 [2024-07-23 08:22:55.360029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.399723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.013 [2024-07-23 08:22:55.399756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.013 #21 NEW cov: 12168 ft: 14604 corp: 9/112b lim: 25 exec/s: 21 rss: 230Mb L: 7/22 MS: 1 ChangeBit- 00:09:43.013 [2024-07-23 08:22:55.459215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.013 [2024-07-23 08:22:55.459253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.459318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.013 [2024-07-23 08:22:55.459334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.459400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.013 [2024-07-23 08:22:55.459418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.498796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.013 [2024-07-23 08:22:55.498828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.498874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.013 [2024-07-23 08:22:55.498892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.013 [2024-07-23 08:22:55.498950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.013 [2024-07-23 08:22:55.498965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.271 #23 NEW cov: 12168 ft: 14759 corp: 10/127b lim: 25 exec/s: 23 rss: 230Mb L: 15/22 MS: 1 ChangeBinInt- 00:09:43.271 [2024-07-23 08:22:55.559912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.271 [2024-07-23 08:22:55.559949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.560009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.271 [2024-07-23 08:22:55.560028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.560090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.271 [2024-07-23 08:22:55.560115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.599569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.271 [2024-07-23 08:22:55.599600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.599664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.271 [2024-07-23 08:22:55.599681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.599736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.271 [2024-07-23 08:22:55.599751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.271 #30 NEW cov: 12168 ft: 14841 corp: 11/146b lim: 25 exec/s: 30 rss: 232Mb L: 19/22 MS: 1 CMP- DE: "\004\000\000\000"- 00:09:43.271 [2024-07-23 08:22:55.660352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.271 [2024-07-23 08:22:55.660390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.271 [2024-07-23 08:22:55.720171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.271 [2024-07-23 08:22:55.720202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.271 #32 NEW cov: 12168 ft: 14908 corp: 12/153b lim: 25 exec/s: 32 rss: 233Mb L: 7/22 MS: 1 ChangeBit- 00:09:43.529 [2024-07-23 08:22:55.792061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.529 [2024-07-23 08:22:55.792094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.792151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.529 [2024-07-23 08:22:55.792168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.792225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.529 [2024-07-23 08:22:55.792240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.792297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:43.529 [2024-07-23 08:22:55.792312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.832014] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.529 [2024-07-23 08:22:55.832044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.832115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.529 [2024-07-23 08:22:55.832134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.832189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.529 [2024-07-23 08:22:55.832205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.832260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:43.529 [2024-07-23 08:22:55.832276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.529 #34 NEW cov: 12168 ft: 14981 corp: 13/176b lim: 25 exec/s: 34 rss: 235Mb L: 23/23 MS: 1 CMP- DE: "\000\000\000\016"- 00:09:43.529 [2024-07-23 08:22:55.902275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.529 [2024-07-23 08:22:55.902308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:55.942384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.529 [2024-07-23 08:22:55.942414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.529 #36 NEW cov: 12168 ft: 15005 corp: 14/183b lim: 25 exec/s: 36 rss: 235Mb L: 7/23 MS: 1 ChangeByte- 00:09:43.529 [2024-07-23 08:22:56.013574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.529 [2024-07-23 08:22:56.013606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.529 [2024-07-23 08:22:56.013678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.529 [2024-07-23 08:22:56.013695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.073664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.788 [2024-07-23 08:22:56.073695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.073736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.788 [2024-07-23 08:22:56.073753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.788 #38 NEW cov: 12168 ft: 15023 corp: 15/195b lim: 25 exec/s: 38 rss: 237Mb L: 12/23 MS: 1 ChangeBinInt- 00:09:43.788 [2024-07-23 08:22:56.136042] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.788 [2024-07-23 08:22:56.136079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.136146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.788 [2024-07-23 08:22:56.136163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.136233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.788 [2024-07-23 08:22:56.136253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.136316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:43.788 [2024-07-23 08:22:56.136334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.195668] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.788 [2024-07-23 08:22:56.195698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.195751] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.788 [2024-07-23 08:22:56.195767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.195821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.788 [2024-07-23 08:22:56.195836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.195894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:43.788 [2024-07-23 08:22:56.195908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.788 #40 NEW cov: 12168 ft: 15068 corp: 16/218b lim: 25 exec/s: 40 rss: 239Mb L: 23/23 MS: 1 ShuffleBytes- 00:09:43.788 [2024-07-23 08:22:56.266310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:43.788 [2024-07-23 08:22:56.266341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.266386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:43.788 [2024-07-23 08:22:56.266403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.788 [2024-07-23 08:22:56.266463] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:43.788 [2024-07-23 08:22:56.266483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.326416] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:44.047 [2024-07-23 08:22:56.326447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.326502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:44.047 [2024-07-23 08:22:56.326518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.326578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:44.047 [2024-07-23 08:22:56.326593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.047 #42 NEW cov: 12168 ft: 15127 corp: 17/234b lim: 25 exec/s: 42 rss: 240Mb L: 16/23 MS: 1 InsertByte- 00:09:44.047 [2024-07-23 08:22:56.388334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:44.047 [2024-07-23 08:22:56.388373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.388444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:44.047 [2024-07-23 08:22:56.388462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.427833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:44.047 [2024-07-23 08:22:56.427863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.047 [2024-07-23 08:22:56.427905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:44.047 [2024-07-23 08:22:56.427921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.047 #44 NEW cov: 12168 ft: 15138 corp: 18/247b lim: 25 exec/s: 22 rss: 241Mb L: 13/23 MS: 1 InsertByte- 00:09:44.047 #44 DONE cov: 12168 ft: 15138 corp: 18/247b lim: 25 exec/s: 22 rss: 241Mb 00:09:44.047 ###### Recommended dictionary. ###### 00:09:44.047 "\004\000\000\000" # Uses: 0 00:09:44.047 "\000\000\000\016" # Uses: 0 00:09:44.047 ###### End of recommended dictionary. 
###### 00:09:44.047 Done 44 runs in 2 second(s) 00:09:44.613 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:44.614 08:22:56 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:09:44.614 [2024-07-23 08:22:56.918130] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
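Two details of the run.sh setup above are worth decoding. The sed call rewrites "trsvcid": "4420" to "4424" in the JSON target config so that each fuzzer instance listens on its own TCP port, and the two echo leak: lines build /var/tmp/suppress_nvmf_fuzz, the LeakSanitizer suppression file that LSAN_OPTIONS points at; each leak:<pattern> rule silences leak reports whose stack contains a matching function. A compiled-in alternative, sketched below on the assumption that the target links LSan (e.g. via -fsanitize=address), is to return the same rules from the __lsan_default_suppressions() hook:

/* Sketch: compiled-in equivalent of the suppression file written above.
 * Assumes the fuzz target links LeakSanitizer (e.g. -fsanitize=address).
 * LSan calls this hook once to obtain extra suppression rules. */
#include <sanitizer/lsan_interface.h>

const char *__lsan_default_suppressions(void)
{
    return "leak:spdk_nvmf_qpair_disconnect\n"
           "leak:nvmf_ctrlr_create\n";
}

The file-based form used by run.sh has the advantage that the suppressions can be adjusted per run without rebuilding the fuzz binary.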
00:09:44.614 [2024-07-23 08:22:56.918236] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634507 ] 00:09:44.614 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.872 [2024-07-23 08:22:57.140923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.872 [2024-07-23 08:22:57.289938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.130 [2024-07-23 08:22:57.527967] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.130 [2024-07-23 08:22:57.544241] tcp.c:1007:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:45.130 INFO: Running with entropic power schedule (0xFF, 100). 00:09:45.130 INFO: Seed: 2912475052 00:09:45.130 INFO: Loaded 1 modules (362333 inline 8-bit counters): 362333 [0x35c9d8c, 0x36224e9), 00:09:45.130 INFO: Loaded 1 PC tables (362333 PCs): 362333 [0x36224f0,0x3ba9ac0), 00:09:45.130 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:45.130 INFO: A corpus is not provided, starting from an empty corpus 00:09:45.130 #2 INITED exec/s: 0 rss: 201Mb 00:09:45.130 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:45.130 This may also happen if the target rejected all inputs we tried so far 00:09:45.130 [2024-07-23 08:22:57.589732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.130 [2024-07-23 08:22:57.589775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.130 [2024-07-23 08:22:57.589830] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.130 [2024-07-23 08:22:57.589850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.130 [2024-07-23 08:22:57.589883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.130 [2024-07-23 08:22:57.589901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.389 NEW_FUNC[1/700]: 0x57c6f0 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:45.389 NEW_FUNC[2/700]: 0x58f640 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:45.389 [2024-07-23 08:22:57.800284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.389 [2024-07-23 08:22:57.800340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.389 [2024-07-23 08:22:57.800404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
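A decoding note on the COMPARE traces above: lba:4919131752989213764 is 0x4444444444444444, i.e. the repeated-0x44 ('D') corpus bytes landing verbatim in the command's 64-bit starting-LBA field (and len:17477 is 0x4445, plausibly the zero-based block count 0x4444 printed one-based, consistent with NVMe's 0's-based NLB). The snippet below is a self-contained illustration of that byte-to-field mapping under the assumption of a straight memcpy of fuzz bytes into the SLBA; it is not SPDK's actual code.

/* Illustration (assumed mapping, not SPDK's code) of why an input of
 * repeated 0x44 bytes prints as lba:4919131752989213764 in the COMPARE
 * traces: the fuzz bytes are copied directly into the 64-bit starting LBA.
 * Byte-identical input makes the result endianness-independent. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint8_t fuzz_bytes[8];
    uint64_t slba;

    memset(fuzz_bytes, 0x44, sizeof(fuzz_bytes));
    memcpy(&slba, fuzz_bytes, sizeof(slba));  /* 0x4444444444444444 */

    printf("lba:%" PRIu64 "\n", slba);        /* lba:4919131752989213764 */
    return 0;
}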
00:09:45.389 [2024-07-23 08:22:57.800424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.389 [2024-07-23 08:22:57.800456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.389 [2024-07-23 08:22:57.800473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.389 #10 NEW cov: 12095 ft: 12096 corp: 2/73b lim: 100 exec/s: 0 rss: 218Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:09:45.389 [2024-07-23 08:22:57.877062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.389 [2024-07-23 08:22:57.877124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.646 [2024-07-23 08:22:57.946689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.646 [2024-07-23 08:22:57.946728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.646 #15 NEW cov: 12125 ft: 13410 corp: 3/100b lim: 100 exec/s: 0 rss: 220Mb L: 27/72 MS: 4 InsertRepeatedBytes-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:09:45.646 [2024-07-23 08:22:58.010437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.646 [2024-07-23 08:22:58.010497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.646 [2024-07-23 08:22:58.010555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.646 [2024-07-23 08:22:58.010585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.646 [2024-07-23 08:22:58.069128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069583077375 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.646 [2024-07-23 08:22:58.069167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.647 [2024-07-23 08:22:58.069208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.647 [2024-07-23 08:22:58.069229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.647 #20 NEW cov: 12137 ft: 13981 corp: 4/152b lim: 100 exec/s: 0 rss: 222Mb L: 52/72 MS: 4 ShuffleBytes-InsertByte-CopyPart-InsertRepeatedBytes- 00:09:45.647 [2024-07-23 08:22:58.121299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.647 [2024-07-23 08:22:58.121358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.905 [2024-07-23 08:22:58.220604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.220643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.905 #22 NEW cov: 12223 ft: 14377 corp: 5/177b lim: 100 exec/s: 0 rss: 223Mb L: 25/72 MS: 1 EraseBytes- 00:09:45.905 [2024-07-23 08:22:58.284848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.284905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.905 [2024-07-23 08:22:58.284962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.284991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.905 [2024-07-23 08:22:58.285048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.285074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.905 NEW_FUNC[1/1]: 0x1e893c0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:45.905 [2024-07-23 08:22:58.374223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.374259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.905 [2024-07-23 08:22:58.374312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.374332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.905 [2024-07-23 08:22:58.374367] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.905 [2024-07-23 08:22:58.374384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.905 #24 NEW cov: 12240 ft: 14623 corp: 6/249b lim: 100 exec/s: 0 rss: 226Mb L: 72/72 MS: 1 ChangeByte- 00:09:46.164 [2024-07-23 08:22:58.449066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.449128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.508689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.508724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.164 #26 NEW cov: 12240 ft: 14696 corp: 7/276b lim: 100 exec/s: 0 rss: 227Mb L: 27/72 MS: 1 ChangeByte- 00:09:46.164 [2024-07-23 08:22:58.579614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.579658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.579707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17481 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.579732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.579775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.579797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.639466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.639501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.639539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17481 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.639559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.164 [2024-07-23 08:22:58.639594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.164 [2024-07-23 08:22:58.639612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.164 #28 NEW cov: 12240 ft: 14797 corp: 8/348b lim: 100 exec/s: 28 rss: 228Mb L: 72/72 MS: 1 ChangeBinInt- 00:09:46.421 [2024-07-23 08:22:58.711090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131754549494852 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.711143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.711192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.711219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.711264] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.711287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.770643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131754549494852 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.770677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.770734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.770755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.770790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.770809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.421 #35 NEW cov: 12240 ft: 14813 corp: 9/420b lim: 100 exec/s: 35 rss: 229Mb L: 72/72 MS: 1 ChangeByte- 00:09:46.421 [2024-07-23 08:22:58.841856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.841902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.841952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17481 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.841978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.842028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.842055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.931661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.931697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.931736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17481 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.931756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.421 [2024-07-23 08:22:58.931791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 
nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.421 [2024-07-23 08:22:58.931809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.679 #47 NEW cov: 12240 ft: 14839 corp: 10/492b lim: 100 exec/s: 47 rss: 230Mb L: 72/72 MS: 1 ShuffleBytes- 00:09:46.679 [2024-07-23 08:22:58.993541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:58.993599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.679 [2024-07-23 08:22:59.082761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:59.082795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.679 #49 NEW cov: 12240 ft: 14920 corp: 11/517b lim: 100 exec/s: 49 rss: 232Mb L: 25/72 MS: 1 ChangeByte- 00:09:46.679 [2024-07-23 08:22:59.145617] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:59.145674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.679 [2024-07-23 08:22:59.145739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919057793652376644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:59.145774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.679 [2024-07-23 08:22:59.145840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:59.145871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.679 [2024-07-23 08:22:59.145928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.679 [2024-07-23 08:22:59.145960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.937 [2024-07-23 08:22:59.234743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.234778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.937 [2024-07-23 08:22:59.234815] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919057793652376644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.234839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.937 [2024-07-23 08:22:59.234878] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.234912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.937 [2024-07-23 08:22:59.234945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.234964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:46.937 #51 NEW cov: 12240 ft: 15301 corp: 12/597b lim: 100 exec/s: 51 rss: 233Mb L: 80/80 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\002"- 00:09:46.937 [2024-07-23 08:22:59.306466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.306512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.937 [2024-07-23 08:22:59.396418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:17723342341386139093 len:62966 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:46.937 [2024-07-23 08:22:59.396457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.937 #58 NEW cov: 12240 ft: 15328 corp: 13/624b lim: 100 exec/s: 58 rss: 234Mb L: 27/80 MS: 1 CrossOver- 00:09:47.196 [2024-07-23 08:22:59.470283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131754549494852 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.470333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.196 [2024-07-23 08:22:59.470387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.470414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.196 [2024-07-23 08:22:59.470460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.470484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.196 [2024-07-23 08:22:59.559915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4919131754549494852 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.559951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.196 [2024-07-23 08:22:59.560009] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.560030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.196 [2024-07-23 08:22:59.560064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4919131752989213764 len:17477 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:47.196 [2024-07-23 08:22:59.560083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.196 #65 NEW cov: 12240 ft: 15358 corp: 14/702b lim: 100 exec/s: 32 rss: 236Mb L: 78/80 MS: 1 CopyPart- 00:09:47.196 #65 DONE cov: 12240 ft: 15358 corp: 14/702b lim: 100 exec/s: 32 rss: 236Mb 00:09:47.196 ###### Recommended dictionary. ###### 00:09:47.196 "\001\000\000\000\000\000\000\002" # Uses: 0 00:09:47.196 ###### End of recommended dictionary. ###### 00:09:47.196 Done 65 runs in 2 second(s) 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:09:47.765 00:09:47.765 real 1m20.102s 00:09:47.765 user 1m44.632s 00:09:47.765 sys 0m9.757s 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.765 08:23:00 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:47.765 ************************************ 00:09:47.765 END TEST nvmf_llvm_fuzz 00:09:47.765 ************************************ 00:09:47.765 08:23:00 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0 00:09:47.765 08:23:00 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:09:47.765 08:23:00 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:09:47.765 08:23:00 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:47.765 08:23:00 llvm_fuzz -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:47.765 08:23:00 llvm_fuzz -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.765 08:23:00 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:47.765 ************************************ 00:09:47.765 START TEST vfio_llvm_fuzz 00:09:47.765 ************************************ 00:09:47.765 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:47.765 * Looking for test storage... 
00:09:47.765 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:47.765 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:47.765 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:47.765 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:47.765 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # 
_root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:47.766 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:47.766 #define SPDK_CONFIG_H 00:09:47.766 #define SPDK_CONFIG_APPS 1 00:09:47.766 #define SPDK_CONFIG_ARCH native 00:09:47.766 #define SPDK_CONFIG_ASAN 1 00:09:47.766 #undef SPDK_CONFIG_AVAHI 00:09:47.766 #undef SPDK_CONFIG_CET 00:09:47.766 #define SPDK_CONFIG_COVERAGE 1 00:09:47.766 #define SPDK_CONFIG_CROSS_PREFIX 00:09:47.766 #undef SPDK_CONFIG_CRYPTO 00:09:47.766 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:47.766 #undef SPDK_CONFIG_CUSTOMOCF 00:09:47.766 #undef SPDK_CONFIG_DAOS 00:09:47.766 #define SPDK_CONFIG_DAOS_DIR 00:09:47.766 #define SPDK_CONFIG_DEBUG 1 00:09:47.766 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:47.766 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:47.766 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:47.767 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:47.767 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:47.767 #undef SPDK_CONFIG_DPDK_UADK 00:09:47.767 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:47.767 #define SPDK_CONFIG_EXAMPLES 1 00:09:47.767 #undef SPDK_CONFIG_FC 00:09:47.767 #define SPDK_CONFIG_FC_PATH 00:09:47.767 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:47.767 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:47.767 #undef SPDK_CONFIG_FUSE 00:09:47.767 #define SPDK_CONFIG_FUZZER 1 00:09:47.767 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:47.767 #undef SPDK_CONFIG_GOLANG 00:09:47.767 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:47.767 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:47.767 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:47.767 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:47.767 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:47.767 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:47.767 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:47.767 #define SPDK_CONFIG_IDXD 1 00:09:47.767 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:47.767 #undef SPDK_CONFIG_IPSEC_MB 00:09:47.767 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:47.767 #define SPDK_CONFIG_ISAL 1 00:09:47.767 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:09:47.767 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:47.767 #define SPDK_CONFIG_LIBDIR 00:09:47.767 #undef SPDK_CONFIG_LTO 00:09:47.767 #define SPDK_CONFIG_MAX_LCORES 128 00:09:47.767 #define SPDK_CONFIG_NVME_CUSE 1 00:09:47.767 #undef SPDK_CONFIG_OCF 00:09:47.767 #define SPDK_CONFIG_OCF_PATH 00:09:47.767 #define SPDK_CONFIG_OPENSSL_PATH 00:09:47.767 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:47.767 #define SPDK_CONFIG_PGO_DIR 00:09:47.767 #undef SPDK_CONFIG_PGO_USE 00:09:47.767 #define SPDK_CONFIG_PREFIX /usr/local 00:09:47.767 #undef SPDK_CONFIG_RAID5F 00:09:47.767 #undef SPDK_CONFIG_RBD 00:09:47.767 #define SPDK_CONFIG_RDMA 1 00:09:47.767 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:47.767 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:47.767 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:47.767 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:47.767 #undef SPDK_CONFIG_SHARED 00:09:47.767 #undef SPDK_CONFIG_SMA 00:09:47.767 #define SPDK_CONFIG_TESTS 1 00:09:47.767 #undef SPDK_CONFIG_TSAN 00:09:47.767 #define SPDK_CONFIG_UBLK 1 00:09:47.767 #define SPDK_CONFIG_UBSAN 1 00:09:47.767 #undef SPDK_CONFIG_UNIT_TESTS 00:09:47.767 #undef SPDK_CONFIG_URING 00:09:47.767 #define SPDK_CONFIG_URING_PATH 00:09:47.767 #undef SPDK_CONFIG_URING_ZNS 00:09:47.767 #undef SPDK_CONFIG_USDT 00:09:47.767 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:47.767 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:47.767 #define SPDK_CONFIG_VFIO_USER 1 00:09:47.767 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:47.767 #define SPDK_CONFIG_VHOST 1 00:09:47.767 #define SPDK_CONFIG_VIRTIO 1 00:09:47.767 #undef SPDK_CONFIG_VTUNE 00:09:47.767 #define SPDK_CONFIG_VTUNE_DIR 00:09:47.767 #define SPDK_CONFIG_WERROR 1 00:09:47.767 #define SPDK_CONFIG_WPDK_DIR 00:09:47.767 #undef SPDK_CONFIG_XNVME 00:09:47.767 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # PM_OS=Linux 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:47.767 
08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@78 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:47.767 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@88 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:47.768 08:23:00 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : true 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # : 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:47.768 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@200 -- # cat 00:09:47.769 08:23:00 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:47.769 08:23:00 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # [[ -z 635070 ]] 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@318 -- # kill -0 635070 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:47.769 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.S02cFH 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.S02cFH/tests/vfio /tmp/spdk.S02cFH 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # df -T 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # 
uses["$mount"]=0 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=948682752 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4335747072 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=53622800384 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742075904 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=8119275520 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30866325504 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871035904 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=12342378496 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348416000 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=6037504 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=30870609920 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871040000 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=430080 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=6174199808 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174203904 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:48.029 * Looking for test storage... 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # target_space=53622800384 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@381 -- # new_size=10333868032 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:48.029 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # return 0 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1687 -- # true 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:48.029 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 
00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:48.030 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:48.030 08:23:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:48.030 [2024-07-23 08:23:00.376483] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:48.030 [2024-07-23 08:23:00.376570] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635109 ] 00:09:48.030 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.030 [2024-07-23 08:23:00.499914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.287 [2024-07-23 08:23:00.664770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.853 INFO: Running with entropic power schedule (0xFF, 100). 00:09:48.853 INFO: Seed: 2154517407 00:09:48.853 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:09:48.853 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:09:48.853 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:48.853 INFO: A corpus is not provided, starting from an empty corpus 00:09:48.853 #2 INITED exec/s: 0 rss: 212Mb 00:09:48.853 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:48.853 This may also happen if the target rejected all inputs we tried so far 00:09:48.853 [2024-07-23 08:23:01.158664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:09:49.111 NEW_FUNC[1/656]: 0x54ae60 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:09:49.111 NEW_FUNC[2/656]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:49.111 #7 NEW cov: 11048 ft: 11023 corp: 2/7b lim: 6 exec/s: 0 rss: 226Mb L: 6/6 MS: 4 CrossOver-ChangeBit-CopyPart-InsertRepeatedBytes- 00:09:49.369 NEW_FUNC[1/4]: 0x1a87110 in nvme_pcie_ctrlr /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_pcie_internal.h:213 00:09:49.369 NEW_FUNC[2/4]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:49.627 #9 NEW cov: 11087 ft: 14362 corp: 3/13b lim: 6 exec/s: 0 rss: 227Mb L: 6/6 MS: 1 ShuffleBytes- 00:09:50.193 #11 NEW cov: 11087 ft: 15631 corp: 4/19b lim: 6 exec/s: 11 rss: 228Mb L: 6/6 MS: 1 CopyPart- 00:09:50.451 #13 NEW cov: 11087 ft: 15924 corp: 5/25b lim: 6 exec/s: 13 rss: 229Mb L: 6/6 MS: 1 CopyPart- 00:09:51.019 #15 NEW cov: 11087 ft: 15957 corp: 6/31b lim: 6 exec/s: 7 rss: 230Mb L: 6/6 MS: 1 ChangeBit- 00:09:51.019 #15 DONE cov: 11087 ft: 15957 corp: 6/31b lim: 6 exec/s: 7 rss: 230Mb 00:09:51.019 Done 15 runs in 2 second(s) 00:09:51.019 [2024-07-23 08:23:03.285901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:09:51.954 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:09:51.954 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:51.954 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 
00:09:51.955 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:51.955 08:23:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:09:51.955 [2024-07-23 08:23:04.291647] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:51.955 [2024-07-23 08:23:04.291749] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635765 ] 00:09:51.955 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.955 [2024-07-23 08:23:04.408255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.213 [2024-07-23 08:23:04.569134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.472 INFO: Running with entropic power schedule (0xFF, 100). 00:09:52.472 INFO: Seed: 1748551150 00:09:52.730 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:09:52.731 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:09:52.731 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:09:52.731 INFO: A corpus is not provided, starting from an empty corpus 00:09:52.731 #2 INITED exec/s: 0 rss: 212Mb 00:09:52.731 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:52.731 This may also happen if the target rejected all inputs we tried so far 00:09:52.731 [2024-07-23 08:23:05.037361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:09:52.731 [2024-07-23 08:23:05.116612] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:52.731 [2024-07-23 08:23:05.116640] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:52.731 [2024-07-23 08:23:05.116685] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:52.989 NEW_FUNC[1/658]: 0x54b5e0 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:09:52.989 NEW_FUNC[2/658]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:52.989 [2024-07-23 08:23:05.449081] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:52.989 [2024-07-23 08:23:05.449135] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:52.989 [2024-07-23 08:23:05.449156] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:53.248 #63 NEW cov: 11033 ft: 10815 corp: 2/5b lim: 4 exec/s: 0 rss: 226Mb L: 4/4 MS: 3 ChangeBit-CopyPart-CopyPart- 00:09:53.248 [2024-07-23 08:23:05.690951] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.248 [2024-07-23 08:23:05.690983] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.248 [2024-07-23 08:23:05.691004] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:53.507 NEW_FUNC[1/3]: 0x1afcc50 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:09:53.507 NEW_FUNC[2/3]: 0x1afe6b0 in spdk_nvme_qpair_process_completions /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:757 00:09:53.507 [2024-07-23 08:23:05.902567] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.507 [2024-07-23 08:23:05.902593] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.507 [2024-07-23 08:23:05.902610] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:53.766 #70 NEW cov: 11071 ft: 13783 corp: 3/9b lim: 4 exec/s: 70 rss: 226Mb L: 4/4 MS: 1 ShuffleBytes- 00:09:53.766 [2024-07-23 08:23:06.140299] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:53.767 [2024-07-23 08:23:06.140327] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:53.767 [2024-07-23 08:23:06.140347] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.026 [2024-07-23 08:23:06.335228] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.026 [2024-07-23 08:23:06.335255] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.026 [2024-07-23 08:23:06.335272] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.026 #72 NEW cov: 11071 ft: 14105 corp: 4/13b lim: 4 exec/s: 72 rss: 227Mb L: 4/4 MS: 1 CrossOver- 00:09:54.285 [2024-07-23 08:23:06.575603] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: bad command 1 00:09:54.285 [2024-07-23 08:23:06.575632] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.285 [2024-07-23 08:23:06.575653] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.285 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:54.285 [2024-07-23 08:23:06.770731] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.285 [2024-07-23 08:23:06.770756] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.285 [2024-07-23 08:23:06.770773] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.543 #79 NEW cov: 11088 ft: 14276 corp: 5/17b lim: 4 exec/s: 79 rss: 228Mb L: 4/4 MS: 1 CopyPart- 00:09:54.543 [2024-07-23 08:23:07.009248] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.543 [2024-07-23 08:23:07.009275] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.543 [2024-07-23 08:23:07.009295] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:54.802 [2024-07-23 08:23:07.213277] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:09:54.802 [2024-07-23 08:23:07.213303] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:09:54.802 [2024-07-23 08:23:07.213320] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:09:55.060 #81 NEW cov: 11088 ft: 14864 corp: 6/21b lim: 4 exec/s: 40 rss: 230Mb L: 4/4 MS: 1 ChangeByte- 00:09:55.060 #81 DONE cov: 11088 ft: 14864 corp: 6/21b lim: 4 exec/s: 40 rss: 230Mb 00:09:55.060 Done 81 runs in 2 second(s) 00:09:55.060 [2024-07-23 08:23:07.377027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:09:55.997 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:55.997 08:23:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:09:55.997 [2024-07-23 08:23:08.385937] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:55.997 [2024-07-23 08:23:08.386023] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636433 ] 00:09:55.997 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.997 [2024-07-23 08:23:08.500940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.256 [2024-07-23 08:23:08.662602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.824 INFO: Running with entropic power schedule (0xFF, 100). 00:09:56.824 INFO: Seed: 1548577342 00:09:56.824 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:09:56.824 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:09:56.824 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:09:56.824 INFO: A corpus is not provided, starting from an empty corpus 00:09:56.824 #2 INITED exec/s: 0 rss: 213Mb 00:09:56.824 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:56.824 This may also happen if the target rejected all inputs we tried so far 00:09:56.824 [2024-07-23 08:23:09.139903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:09:56.824 [2024-07-23 08:23:09.202875] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.082 NEW_FUNC[1/660]: 0x54c210 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:09:57.082 NEW_FUNC[2/660]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:57.082 [2024-07-23 08:23:09.546586] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.341 #12 NEW cov: 11036 ft: 11001 corp: 2/9b lim: 8 exec/s: 0 rss: 226Mb L: 8/8 MS: 4 ChangeByte-InsertRepeatedBytes-InsertByte-InsertByte- 00:09:57.341 [2024-07-23 08:23:09.791717] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.600 [2024-07-23 08:23:10.004640] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:57.859 #17 NEW cov: 11050 ft: 13842 corp: 3/17b lim: 8 exec/s: 17 rss: 227Mb L: 8/8 MS: 4 CrossOver-ChangeByte-InsertByte-InsertRepeatedBytes- 00:09:57.859 [2024-07-23 08:23:10.247146] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.118 [2024-07-23 08:23:10.443496] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.118 #19 NEW cov: 11050 ft: 15690 corp: 4/25b lim: 8 exec/s: 19 rss: 228Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:58.385 [2024-07-23 08:23:10.684981] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.385 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:09:58.385 [2024-07-23 08:23:10.902858] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.645 #21 NEW cov: 11067 ft: 15793 corp: 5/33b lim: 8 exec/s: 21 rss: 230Mb L: 8/8 MS: 1 ShuffleBytes- 00:09:58.645 [2024-07-23 08:23:11.140766] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:58.902 [2024-07-23 08:23:11.350304] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:09:59.161 #31 NEW cov: 11067 ft: 16129 corp: 6/41b lim: 8 exec/s: 15 rss: 231Mb L: 8/8 MS: 4 InsertRepeatedBytes-CrossOver-CopyPart-InsertByte- 00:09:59.161 #31 DONE cov: 11067 ft: 16129 corp: 6/41b lim: 8 exec/s: 15 rss: 231Mb 00:09:59.161 Done 31 runs in 2 second(s) 00:09:59.161 [2024-07-23 08:23:11.517053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- 
# local core=0x1 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:10:00.098 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:00.098 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:00.099 08:23:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:10:00.099 [2024-07-23 08:23:12.521711] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:00.099 [2024-07-23 08:23:12.521812] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637088 ] 00:10:00.099 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.358 [2024-07-23 08:23:12.640625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.358 [2024-07-23 08:23:12.800377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.926 INFO: Running with entropic power schedule (0xFF, 100). 00:10:00.926 INFO: Seed: 1395608055 00:10:00.926 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:10:00.926 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:10:00.926 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:00.926 INFO: A corpus is not provided, starting from an empty corpus 00:10:00.926 #2 INITED exec/s: 0 rss: 213Mb 00:10:00.926 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:00.926 This may also happen if the target rejected all inputs we tried so far 00:10:00.926 [2024-07-23 08:23:13.331716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:10:01.185 NEW_FUNC[1/660]: 0x54ca60 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:10:01.185 NEW_FUNC[2/660]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:01.443 #206 NEW cov: 11048 ft: 10961 corp: 2/33b lim: 32 exec/s: 0 rss: 226Mb L: 32/32 MS: 4 InsertRepeatedBytes-CopyPart-InsertByte-CopyPart- 00:10:01.702 NEW_FUNC[1/1]: 0x17469f0 in free_qp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1885 00:10:01.960 #208 NEW cov: 11071 ft: 13183 corp: 3/65b lim: 32 exec/s: 208 rss: 227Mb L: 32/32 MS: 1 CrossOver- 00:10:02.219 #210 NEW cov: 11071 ft: 13821 corp: 4/97b lim: 32 exec/s: 210 rss: 227Mb L: 32/32 MS: 1 CrossOver- 00:10:02.478 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:02.736 #212 NEW cov: 11088 ft: 14148 corp: 5/129b lim: 32 exec/s: 212 rss: 229Mb L: 32/32 MS: 1 CopyPart- 00:10:03.304 #214 NEW cov: 11090 ft: 14552 corp: 6/161b lim: 32 exec/s: 107 rss: 230Mb L: 32/32 MS: 1 CopyPart- 00:10:03.304 #214 DONE cov: 11090 ft: 14552 corp: 6/161b lim: 32 exec/s: 107 rss: 230Mb 00:10:03.304 Done 214 runs in 2 second(s) 00:10:03.304 [2024-07-23 08:23:15.594072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 
's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:10:04.242 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:04.242 08:23:16 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:10:04.242 [2024-07-23 08:23:16.597997] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:04.242 [2024-07-23 08:23:16.598105] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637741 ] 00:10:04.242 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.242 [2024-07-23 08:23:16.713968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.501 [2024-07-23 08:23:16.875356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.069 INFO: Running with entropic power schedule (0xFF, 100). 00:10:05.069 INFO: Seed: 1169632572 00:10:05.069 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:10:05.069 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:10:05.069 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:05.069 INFO: A corpus is not provided, starting from an empty corpus 00:10:05.069 #2 INITED exec/s: 0 rss: 212Mb 00:10:05.069 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:05.069 This may also happen if the target rejected all inputs we tried so far 00:10:05.069 [2024-07-23 08:23:17.351308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:10:05.328 NEW_FUNC[1/659]: 0x54d5c0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:10:05.328 NEW_FUNC[2/659]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:05.586 #121 NEW cov: 10993 ft: 10657 corp: 2/33b lim: 32 exec/s: 0 rss: 226Mb L: 32/32 MS: 4 ShuffleBytes-CopyPart-CrossOver-InsertRepeatedBytes- 00:10:05.586 NEW_FUNC[1/2]: 0x17469f0 in free_qp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1885 00:10:05.586 NEW_FUNC[2/2]: 0x23ab3a0 in spdk_ioviter_nextv /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/util/iov.c:91 00:10:05.844 #128 NEW cov: 11072 ft: 13159 corp: 3/65b lim: 32 exec/s: 0 rss: 228Mb L: 32/32 MS: 1 ChangeByte- 00:10:06.410 #135 NEW cov: 11072 ft: 14676 corp: 4/97b lim: 32 exec/s: 135 rss: 228Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:06.410 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613 00:10:06.668 #137 NEW cov: 11092 ft: 15113 corp: 5/129b lim: 32 exec/s: 137 rss: 230Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:07.234 #139 NEW cov: 11092 ft: 15145 corp: 6/161b lim: 32 exec/s: 69 rss: 231Mb L: 32/32 MS: 1 CMP- DE: ">\001\000\000"- 00:10:07.234 #139 DONE cov: 11092 ft: 15145 corp: 6/161b lim: 32 exec/s: 69 rss: 231Mb 00:10:07.234 ###### Recommended dictionary. ###### 00:10:07.234 ">\001\000\000" # Uses: 0 00:10:07.234 ###### End of recommended dictionary. ###### 00:10:07.234 Done 139 runs in 2 second(s) 00:10:07.234 [2024-07-23 08:23:19.564842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:08.171 08:23:20 
llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:10:08.171 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:08.171 08:23:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:10:08.171 [2024-07-23 08:23:20.590782] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:08.171 [2024-07-23 08:23:20.590871] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638396 ] 00:10:08.171 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.430 [2024-07-23 08:23:20.706417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.430 [2024-07-23 08:23:20.866150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.998 INFO: Running with entropic power schedule (0xFF, 100). 00:10:08.998 INFO: Seed: 879676973 00:10:08.998 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424), 00:10:08.998 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8), 00:10:08.998 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:08.998 INFO: A corpus is not provided, starting from an empty corpus 00:10:08.998 #2 INITED exec/s: 0 rss: 212Mb 00:10:08.998 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:08.998 This may also happen if the target rejected all inputs we tried so far
00:10:08.998 [2024-07-23 08:23:21.353182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:10:08.998 [2024-07-23 08:23:21.417151] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:08.998 [2024-07-23 08:23:21.417194] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:09.257 NEW_FUNC[1/661]: 0x54e530 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:10:09.257 NEW_FUNC[2/661]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:10:09.257 [2024-07-23 08:23:21.765052] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:09.257 [2024-07-23 08:23:21.765104] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:09.515 #8 NEW cov: 11047 ft: 11009 corp: 2/14b lim: 13 exec/s: 0 rss: 226Mb L: 13/13 MS: 5 InsertByte-CrossOver-InsertByte-EraseBytes-InsertRepeatedBytes-
00:10:09.515 [2024-07-23 08:23:22.010664] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:09.515 [2024-07-23 08:23:22.010704] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:09.774 [2024-07-23 08:23:22.215859] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:09.774 [2024-07-23 08:23:22.215892] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:10.032 #15 NEW cov: 11073 ft: 14349 corp: 3/27b lim: 13 exec/s: 15 rss: 226Mb L: 13/13 MS: 1 ChangeBinInt-
00:10:10.032 [2024-07-23 08:23:22.454573] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:10.032 [2024-07-23 08:23:22.454612] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:10.290 [2024-07-23 08:23:22.652282] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:10.290 [2024-07-23 08:23:22.652316] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:10.290 #17 NEW cov: 11073 ft: 14647 corp: 4/40b lim: 13 exec/s: 17 rss: 227Mb L: 13/13 MS: 1 ChangeBinInt-
00:10:10.548 [2024-07-23 08:23:22.890514] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:10.548 [2024-07-23 08:23:22.890550] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:10.548 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:10:10.807 [2024-07-23 08:23:23.109026] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:10.807 [2024-07-23 08:23:23.109059] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:10.807 #20 NEW cov: 11090 ft: 14981 corp: 5/53b lim: 13 exec/s: 20 rss: 229Mb L: 13/13 MS: 2 CrossOver-InsertByte-
00:10:11.065 [2024-07-23 08:23:23.345643] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:11.065 [2024-07-23 08:23:23.345686] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:11.065 #32 pulse cov: 11090 ft: 15181 corp: 5/53b lim: 13 exec/s: 16 rss: 229Mb
00:10:11.065 [2024-07-23 08:23:23.552749] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:11.065 [2024-07-23 08:23:23.552781] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:11.323 #33 NEW cov: 11090 ft: 15181 corp: 6/66b lim: 13 exec/s: 16 rss: 231Mb L: 13/13 MS: 1 ChangeBinInt-
00:10:11.323 #33 DONE cov: 11090 ft: 15181 corp: 6/66b lim: 13 exec/s: 16 rss: 231Mb
00:10:11.323 Done 33 runs in 2 second(s)
00:10:11.323 [2024-07-23 08:23:23.709066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:10:12.259 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:10:12.259 08:23:24 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:10:12.259 [2024-07-23 08:23:24.722458] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:10:12.259 [2024-07-23 08:23:24.722547] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639052 ]
00:10:12.259 EAL: No free 2048 kB hugepages reported on node 1
00:10:12.517 [2024-07-23 08:23:24.838643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.517 [2024-07-23 08:23:24.997333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.084 INFO: Running with entropic power schedule (0xFF, 100).
00:10:13.084 INFO: Seed: 712703984
00:10:13.084 INFO: Loaded 1 modules (359576 inline 8-bit counters): 359576 [0x358578c, 0x35dd424),
00:10:13.084 INFO: Loaded 1 PC tables (359576 PCs): 359576 [0x35dd428,0x3b59da8),
00:10:13.084 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:10:13.084 INFO: A corpus is not provided, starting from an empty corpus
00:10:13.084 #2 INITED exec/s: 0 rss: 213Mb
00:10:13.084 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:10:13.084 This may also happen if the target rejected all inputs we tried so far
00:10:13.084 [2024-07-23 08:23:25.478995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:10:13.084 [2024-07-23 08:23:25.556465] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:13.085 [2024-07-23 08:23:25.556505] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:13.343 NEW_FUNC[1/661]: 0x54f440 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:10:13.343 NEW_FUNC[2/661]: 0x552440 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:10:13.601 [2024-07-23 08:23:25.894443] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:13.601 [2024-07-23 08:23:25.894489] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:13.601 #10 NEW cov: 11048 ft: 11010 corp: 2/10b lim: 9 exec/s: 0 rss: 226Mb L: 9/9 MS: 2 InsertRepeatedBytes-InsertByte-
00:10:13.859 [2024-07-23 08:23:26.139855] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:13.859 [2024-07-23 08:23:26.139896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:13.859 [2024-07-23 08:23:26.337000] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:13.859 [2024-07-23 08:23:26.337033] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:14.118 #18 NEW cov: 11062 ft: 13405 corp: 3/19b lim: 9 exec/s: 18 rss: 227Mb L: 9/9 MS: 1 CopyPart-
00:10:14.118 [2024-07-23 08:23:26.576866] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:14.118 [2024-07-23 08:23:26.576904] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:14.376 [2024-07-23 08:23:26.803130] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:14.376 [2024-07-23 08:23:26.803164] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:14.635 #24 NEW cov: 11062 ft: 15437 corp: 4/28b lim: 9 exec/s: 24 rss: 227Mb L: 9/9 MS: 5 ShuffleBytes-CopyPart-InsertByte-ChangeByte-InsertRepeatedBytes-
00:10:14.635 [2024-07-23 08:23:27.048262] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:14.635 [2024-07-23 08:23:27.048300] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:14.893 NEW_FUNC[1/1]: 0x1e4ff90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:613
00:10:14.893 [2024-07-23 08:23:27.261000] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:14.893 [2024-07-23 08:23:27.261033] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:14.893 #26 NEW cov: 11082 ft: 15712 corp: 5/37b lim: 9 exec/s: 26 rss: 229Mb L: 9/9 MS: 1 CopyPart-
00:10:15.152 [2024-07-23 08:23:27.498954] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:10:15.152 [2024-07-23 08:23:27.498991] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:10:15.152 #27 NEW cov: 11082 ft: 16034 corp: 6/46b lim: 9 exec/s: 13 rss: 229Mb L: 9/9 MS: 1 CopyPart-
00:10:15.152 #27 DONE cov: 11082 ft: 16034 corp: 6/46b lim: 9 exec/s: 13 rss: 229Mb
00:10:15.152 Done 27 runs in 2 second(s)
00:10:15.152 [2024-07-23 08:23:27.637609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:10:16.088
00:10:16.088 real 0m28.513s
00:10:16.088 user 0m36.572s
00:10:16.088 sys 0m2.519s
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:16.088 08:23:28 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:10:16.088 ************************************
00:10:16.088 END TEST vfio_llvm_fuzz
00:10:16.088 ************************************
00:10:16.347 08:23:28 llvm_fuzz -- common/autotest_common.sh@1142 -- # return 0
00:10:16.347 08:23:28 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]]
00:10:16.347
00:10:16.347 real 1m48.848s
00:10:16.347 user 2m21.294s
00:10:16.347 sys 0m12.436s
00:10:16.347 08:23:28 llvm_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:16.347 08:23:28 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:10:16.347 ************************************
00:10:16.347 END TEST llvm_fuzz
00:10:16.347 ************************************
00:10:16.347 08:23:28 -- common/autotest_common.sh@1142 -- # return 0
00:10:16.347 08:23:28 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:10:16.347 08:23:28 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:10:16.347 08:23:28 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:10:16.347 08:23:28 -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:16.347 08:23:28 -- common/autotest_common.sh@10 -- # set +x
00:10:16.347 08:23:28 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:10:16.347 08:23:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:10:16.347 08:23:28 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:10:16.347 08:23:28 -- common/autotest_common.sh@10 -- # set +x
00:10:20.538 INFO: APP EXITING
00:10:20.538 INFO: killing all VMs
00:10:20.538 INFO: killing vhost app
00:10:20.538 INFO: EXIT DONE
00:10:23.829 Waiting for block devices as requested
00:10:23.829 0000:dd:00.0 (8086 0a54): vfio-pci -> nvme
00:10:23.829 0000:df:00.0 (8086 0a54): vfio-pci -> nvme
00:10:23.829 0000:de:00.0 (8086 0953): vfio-pci -> nvme
00:10:23.829 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:24.087 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:24.087 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:24.087 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:24.346 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:24.346 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:24.346 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:24.605 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:24.605 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:10:24.605 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:10:24.605 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:10:24.864 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:10:24.864 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:10:24.864 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:10:25.122 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:10:25.122 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:10:25.122 0000:dc:00.0 (8086 0953): vfio-pci -> nvme
00:10:30.394 Cleaning
00:10:30.395 Removing: /dev/shm/spdk_tgt_trace.pid594086
00:10:30.395 Removing: /var/run/dpdk/spdk_pid589738
00:10:30.395 Removing: /var/run/dpdk/spdk_pid591601
00:10:30.395 Removing: /var/run/dpdk/spdk_pid594086
00:10:30.395 Removing: /var/run/dpdk/spdk_pid594900
00:10:30.395 Removing: /var/run/dpdk/spdk_pid595999
00:10:30.395 Removing: /var/run/dpdk/spdk_pid596434
00:10:30.395 Removing: /var/run/dpdk/spdk_pid597554
00:10:30.395 Removing: /var/run/dpdk/spdk_pid597751
00:10:30.395 Removing: /var/run/dpdk/spdk_pid598303
00:10:30.395 Removing: /var/run/dpdk/spdk_pid598826
00:10:30.395 Removing: /var/run/dpdk/spdk_pid599313
00:10:30.395 Removing: /var/run/dpdk/spdk_pid599811
00:10:30.395 Removing: /var/run/dpdk/spdk_pid600304
00:10:30.395 Removing: /var/run/dpdk/spdk_pid600548
00:10:30.395 Removing: /var/run/dpdk/spdk_pid600873
00:10:30.395 Removing: /var/run/dpdk/spdk_pid601249
00:10:30.395 Removing: /var/run/dpdk/spdk_pid601950
00:10:30.395 Removing: /var/run/dpdk/spdk_pid604956
00:10:30.395 Removing: /var/run/dpdk/spdk_pid605437
00:10:30.395 Removing: /var/run/dpdk/spdk_pid606011
00:10:30.395 Removing: /var/run/dpdk/spdk_pid606088
00:10:30.395 Removing: /var/run/dpdk/spdk_pid607195
00:10:30.395 Removing: /var/run/dpdk/spdk_pid607406
00:10:30.395 Removing: /var/run/dpdk/spdk_pid608506
00:10:30.395 Removing: /var/run/dpdk/spdk_pid608583
00:10:30.395 Removing: /var/run/dpdk/spdk_pid608983
00:10:30.395 Removing: /var/run/dpdk/spdk_pid609197
00:10:30.395 Removing: /var/run/dpdk/spdk_pid609648
00:10:30.395 Removing: /var/run/dpdk/spdk_pid609671
00:10:30.395 Removing: /var/run/dpdk/spdk_pid610649
00:10:30.395 Removing: /var/run/dpdk/spdk_pid611054
00:10:30.395 Removing: /var/run/dpdk/spdk_pid611300
00:10:30.395 Removing: /var/run/dpdk/spdk_pid611571
00:10:30.395 Removing: /var/run/dpdk/spdk_pid612032
00:10:30.395 Removing: /var/run/dpdk/spdk_pid612136
00:10:30.395 Removing: /var/run/dpdk/spdk_pid612420
00:10:30.395 Removing: /var/run/dpdk/spdk_pid612794
00:10:30.395 Removing: /var/run/dpdk/spdk_pid613180
00:10:30.395 Removing: /var/run/dpdk/spdk_pid613472
00:10:30.395 Removing: /var/run/dpdk/spdk_pid613913
00:10:30.395 Removing: /var/run/dpdk/spdk_pid614156
00:10:30.395 Removing: /var/run/dpdk/spdk_pid614704
00:10:30.395 Removing: /var/run/dpdk/spdk_pid614964
00:10:30.395 Removing: /var/run/dpdk/spdk_pid615402
00:10:30.395 Removing: /var/run/dpdk/spdk_pid615721
00:10:30.395 Removing: /var/run/dpdk/spdk_pid616285
00:10:30.395 Removing: /var/run/dpdk/spdk_pid617039
00:10:30.395 Removing: /var/run/dpdk/spdk_pid617279
00:10:30.395 Removing: /var/run/dpdk/spdk_pid617720
00:10:30.395 Removing: /var/run/dpdk/spdk_pid617961
00:10:30.395 Removing: /var/run/dpdk/spdk_pid618400
00:10:30.395 Removing: /var/run/dpdk/spdk_pid618690
00:10:30.395 Removing: /var/run/dpdk/spdk_pid619085
00:10:30.395 Removing: /var/run/dpdk/spdk_pid619529
00:10:30.395 Removing: /var/run/dpdk/spdk_pid619768
00:10:30.395 Removing: /var/run/dpdk/spdk_pid620216
00:10:30.395 Removing: /var/run/dpdk/spdk_pid620488
00:10:30.395 Removing: /var/run/dpdk/spdk_pid620994
00:10:30.395 Removing: /var/run/dpdk/spdk_pid621829
00:10:30.395 Removing: /var/run/dpdk/spdk_pid622287
00:10:30.395 Removing: /var/run/dpdk/spdk_pid622942
00:10:30.395 Removing: /var/run/dpdk/spdk_pid623398
00:10:30.395 Removing: /var/run/dpdk/spdk_pid623869
00:10:30.395 Removing: /var/run/dpdk/spdk_pid624514
00:10:30.395 Removing: /var/run/dpdk/spdk_pid624975
00:10:30.395 Removing: /var/run/dpdk/spdk_pid625529
00:10:30.395 Removing: /var/run/dpdk/spdk_pid626084
00:10:30.395 Removing: /var/run/dpdk/spdk_pid626541
00:10:30.395 Removing: /var/run/dpdk/spdk_pid627158
00:10:30.395 Removing: /var/run/dpdk/spdk_pid627650
00:10:30.395 Removing: /var/run/dpdk/spdk_pid628109
00:10:30.395 Removing: /var/run/dpdk/spdk_pid628762
00:10:30.395 Removing: /var/run/dpdk/spdk_pid629220
00:10:30.395 Removing: /var/run/dpdk/spdk_pid629697
00:10:30.395 Removing: /var/run/dpdk/spdk_pid630329
00:10:30.395 Removing: /var/run/dpdk/spdk_pid630789
00:10:30.395 Removing: /var/run/dpdk/spdk_pid631333
00:10:30.395 Removing: /var/run/dpdk/spdk_pid631894
00:10:30.395 Removing: /var/run/dpdk/spdk_pid632357
00:10:30.395 Removing: /var/run/dpdk/spdk_pid632910
00:10:30.395 Removing: /var/run/dpdk/spdk_pid633465
00:10:30.395 Removing: /var/run/dpdk/spdk_pid633934
00:10:30.395 Removing: /var/run/dpdk/spdk_pid634507
00:10:30.395 Removing: /var/run/dpdk/spdk_pid635109
00:10:30.395 Removing: /var/run/dpdk/spdk_pid635765
00:10:30.395 Removing: /var/run/dpdk/spdk_pid636433
00:10:30.395 Removing: /var/run/dpdk/spdk_pid637088
00:10:30.395 Removing: /var/run/dpdk/spdk_pid637741
00:10:30.395 Removing: /var/run/dpdk/spdk_pid638396
00:10:30.395 Removing: /var/run/dpdk/spdk_pid639052
00:10:30.395 Clean
00:10:30.395 08:23:42 -- common/autotest_common.sh@1451 -- # return 0
00:10:30.395 08:23:42 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:10:30.395 08:23:42 -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:30.395 08:23:42 -- common/autotest_common.sh@10 -- # set +x
00:10:30.395 08:23:42 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:10:30.395 08:23:42 -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:30.395 08:23:42 -- common/autotest_common.sh@10 -- # set +x
00:10:30.395 08:23:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:30.395 08:23:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:10:30.395 08:23:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:10:30.395 08:23:42 -- spdk/autotest.sh@391 -- # hash lcov
00:10:30.395 08:23:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]]
00:10:30.395 08:23:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:10:30.395 08:23:42 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:10:30.395 08:23:42 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:30.395 08:23:42 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:30.395 08:23:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:30.395 08:23:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:30.395 08:23:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:30.395 08:23:42 -- paths/export.sh@5 -- $ export PATH
00:10:30.395 08:23:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:30.395 08:23:42 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:10:30.395 08:23:42 -- common/autobuild_common.sh@447 -- $ date +%s
00:10:30.395 08:23:42 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721715822.XXXXXX
00:10:30.395 08:23:42 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721715822.Tq4zOZ
00:10:30.395 08:23:42 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:10:30.395 08:23:42 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:10:30.395 08:23:42 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:10:30.395 08:23:42 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:10:30.395 08:23:42 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:10:30.395 08:23:42 -- common/autobuild_common.sh@463 -- $ get_config_params
00:10:30.395 08:23:42 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:10:30.395 08:23:42 -- common/autotest_common.sh@10 -- $ set +x
00:10:30.655 08:23:42 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user'
00:10:30.655 08:23:42 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:10:30.655 08:23:42 -- pm/common@17 -- $ local monitor
00:10:30.655 08:23:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:30.655 08:23:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:30.655 08:23:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:30.655 08:23:42 -- pm/common@21 -- $ date +%s
00:10:30.655 08:23:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:30.655 08:23:42 -- pm/common@21 -- $ date +%s
00:10:30.655 08:23:42 -- pm/common@25 -- $ sleep 1
00:10:30.655 08:23:42 -- pm/common@21 -- $ date +%s
00:10:30.655 08:23:42 -- pm/common@21 -- $ date +%s
00:10:30.655 08:23:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721715822
00:10:30.656 08:23:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721715822
00:10:30.656 08:23:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721715822
00:10:30.656 08:23:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721715822
00:10:30.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721715822_collect-vmstat.pm.log
00:10:30.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721715822_collect-cpu-load.pm.log
00:10:30.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721715822_collect-cpu-temp.pm.log
00:10:30.656 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721715822_collect-bmc-pm.bmc.pm.log
00:10:31.594 08:23:43 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:10:31.594 08:23:43 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88
00:10:31.594 08:23:43 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:31.594 08:23:43 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:10:31.594 08:23:43 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:10:31.594 08:23:43 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:10:31.594 08:23:43 -- spdk/autopackage.sh@19 -- $ timing_finish
00:10:31.594 08:23:43 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:10:31.594 08:23:43 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:10:31.594 08:23:43 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:10:31.594 08:23:43 -- spdk/autopackage.sh@20 -- $ exit 0
00:10:31.594 08:23:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:10:31.594 08:23:43 -- pm/common@29 -- $ signal_monitor_resources TERM
00:10:31.594 08:23:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:10:31.594 08:23:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:31.594 08:23:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:10:31.594 08:23:43 -- pm/common@44 -- $ pid=646042
00:10:31.594 08:23:43 -- pm/common@50 -- $ kill -TERM 646042
00:10:31.594 08:23:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:31.594 08:23:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:10:31.594 08:23:43 -- pm/common@44 -- $ pid=646045
00:10:31.594 08:23:43 -- pm/common@50 -- $ kill -TERM 646045
00:10:31.594 08:23:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:31.594 08:23:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:10:31.594 08:23:43 -- pm/common@44 -- $ pid=646047
00:10:31.594 08:23:43 -- pm/common@50 -- $ kill -TERM 646047
00:10:31.594 08:23:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:10:31.594 08:23:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:10:31.594 08:23:43 -- pm/common@44 -- $ pid=646081
00:10:31.594 08:23:43 -- pm/common@50 -- $ sudo -E kill -TERM 646081
00:10:31.594 + [[ -n 471914 ]]
00:10:31.594 + sudo kill 471914
00:10:31.604 [Pipeline] }
00:10:31.625 [Pipeline] // stage
00:10:31.631 [Pipeline] }
00:10:31.652 [Pipeline] // timeout
00:10:31.658 [Pipeline] }
00:10:31.676 [Pipeline] // catchError
00:10:31.683 [Pipeline] }
00:10:31.703 [Pipeline] // wrap
00:10:31.709 [Pipeline] }
00:10:31.744 [Pipeline] // catchError
00:10:31.753 [Pipeline] stage
00:10:31.756 [Pipeline] { (Epilogue)
00:10:31.770 [Pipeline] catchError
00:10:31.772 [Pipeline] {
00:10:31.787 [Pipeline] echo
00:10:31.789 Cleanup processes
00:10:31.795 [Pipeline] sh
00:10:32.079 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:32.079 532422 sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329
00:10:32.079 532456 bash /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721715329
00:10:32.079 645893 /usr/bin/ipmitool -c -S /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache sdr get PS1 Input Power
00:10:32.079 646318 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:10:32.079 647056 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:32.093 [Pipeline] sh
00:10:32.379 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:10:32.380 ++ grep -v 'sudo pgrep'
00:10:32.380 ++ awk '{print $1}'
00:10:32.380 + sudo kill -9 532422 532456 646318
00:10:32.392 [Pipeline] sh
00:10:32.675 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:10:34.065 [Pipeline] sh
00:10:34.350 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:10:34.350 Artifacts sizes are good
00:10:34.366 [Pipeline] archiveArtifacts
00:10:34.373 Archiving artifacts
00:10:34.458 [Pipeline] sh
00:10:34.746 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest
00:10:34.761 [Pipeline] cleanWs
00:10:34.772 [WS-CLEANUP] Deleting project workspace...
00:10:34.772 [WS-CLEANUP] Deferred wipeout is used...
00:10:34.780 [WS-CLEANUP] done
00:10:34.782 [Pipeline] }
00:10:34.801 [Pipeline] // catchError
00:10:34.814 [Pipeline] sh
00:10:35.103 + logger -p user.info -t JENKINS-CI
00:10:35.126 [Pipeline] }
00:10:35.143 [Pipeline] // stage
00:10:35.148 [Pipeline] }
00:10:35.167 [Pipeline] // node
00:10:35.174 [Pipeline] End of Pipeline
00:10:35.219 Finished: SUCCESS