00:00:00.001 Started by upstream project "autotest-nightly" build number 4134 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3496 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.269 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.269 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.095 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.107 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.120 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:05.120 > git config core.sparsecheckout # timeout=10 00:00:05.132 > git read-tree -mu HEAD # timeout=10 00:00:05.150 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:05.171 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:05.171 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:05.274 [Pipeline] Start of Pipeline 00:00:05.287 [Pipeline] library 00:00:05.289 Loading library shm_lib@master 00:00:05.289 Library shm_lib@master is cached. Copying from home. 00:00:05.306 [Pipeline] node 00:00:05.320 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:05.322 [Pipeline] { 00:00:05.330 [Pipeline] catchError 00:00:05.331 [Pipeline] { 00:00:05.343 [Pipeline] wrap 00:00:05.351 [Pipeline] { 00:00:05.358 [Pipeline] stage 00:00:05.359 [Pipeline] { (Prologue) 00:00:05.533 [Pipeline] sh 00:00:05.818 + logger -p user.info -t JENKINS-CI 00:00:05.834 [Pipeline] echo 00:00:05.835 Node: WFP20 00:00:05.841 [Pipeline] sh 00:00:06.140 [Pipeline] setCustomBuildProperty 00:00:06.148 [Pipeline] echo 00:00:06.149 Cleanup processes 00:00:06.153 [Pipeline] sh 00:00:06.436 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.436 917029 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.449 [Pipeline] sh 00:00:06.733 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:06.733 ++ awk '{print $1}' 00:00:06.733 ++ grep -v 'sudo pgrep' 00:00:06.733 + sudo kill -9 00:00:06.733 + true 00:00:06.748 [Pipeline] cleanWs 00:00:06.759 [WS-CLEANUP] Deleting project workspace... 00:00:06.759 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.765 [WS-CLEANUP] done 00:00:06.770 [Pipeline] setCustomBuildProperty 00:00:06.786 [Pipeline] sh 00:00:07.090 + sudo git config --global --replace-all safe.directory '*' 00:00:07.171 [Pipeline] httpRequest 00:00:08.166 [Pipeline] echo 00:00:08.167 Sorcerer 10.211.164.101 is alive 00:00:08.176 [Pipeline] retry 00:00:08.178 [Pipeline] { 00:00:08.188 [Pipeline] httpRequest 00:00:08.192 HttpMethod: GET 00:00:08.193 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.193 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.211 Response Code: HTTP/1.1 200 OK 00:00:08.211 Success: Status code 200 is in the accepted range: 200,404 00:00:08.211 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:14.496 [Pipeline] } 00:00:14.513 [Pipeline] // retry 00:00:14.521 [Pipeline] sh 00:00:14.806 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:14.821 [Pipeline] httpRequest 00:00:15.255 [Pipeline] echo 00:00:15.257 Sorcerer 10.211.164.101 is alive 00:00:15.266 [Pipeline] retry 00:00:15.268 [Pipeline] { 00:00:15.282 [Pipeline] httpRequest 00:00:15.287 HttpMethod: GET 00:00:15.287 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:15.288 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:15.289 Response Code: HTTP/1.1 200 OK 00:00:15.290 Success: Status code 200 is in the accepted range: 200,404 00:00:15.290 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:34.501 [Pipeline] } 00:00:34.521 [Pipeline] // retry 00:00:34.528 [Pipeline] sh 00:00:34.817 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:37.363 [Pipeline] sh 00:00:37.649 + git -C spdk log --oneline -n5 00:00:37.649 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:37.649 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:37.649 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:37.649 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:37.649 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:37.660 [Pipeline] } 00:00:37.675 [Pipeline] // stage 00:00:37.685 [Pipeline] stage 00:00:37.687 [Pipeline] { (Prepare) 00:00:37.704 [Pipeline] writeFile 00:00:37.721 [Pipeline] sh 00:00:38.005 + logger -p user.info -t JENKINS-CI 00:00:38.017 [Pipeline] sh 00:00:38.302 + logger -p user.info -t JENKINS-CI 00:00:38.315 [Pipeline] sh 00:00:38.601 + cat autorun-spdk.conf 00:00:38.601 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.601 SPDK_TEST_FUZZER_SHORT=1 00:00:38.601 SPDK_TEST_FUZZER=1 00:00:38.601 SPDK_TEST_SETUP=1 00:00:38.601 SPDK_RUN_UBSAN=1 00:00:38.609 RUN_NIGHTLY=1 00:00:38.614 [Pipeline] readFile 00:00:38.641 [Pipeline] withEnv 00:00:38.643 [Pipeline] { 00:00:38.656 [Pipeline] sh 00:00:38.941 + set -ex 00:00:38.941 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:00:38.941 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:38.941 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.941 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:38.941 ++ SPDK_TEST_FUZZER=1 00:00:38.941 ++ SPDK_TEST_SETUP=1 00:00:38.941 ++ SPDK_RUN_UBSAN=1 00:00:38.941 ++ RUN_NIGHTLY=1 00:00:38.941 + case $SPDK_TEST_NVMF_NICS in 
00:00:38.941 + DRIVERS= 00:00:38.941 + [[ -n '' ]] 00:00:38.941 + exit 0 00:00:38.950 [Pipeline] } 00:00:38.964 [Pipeline] // withEnv 00:00:38.970 [Pipeline] } 00:00:38.983 [Pipeline] // stage 00:00:38.994 [Pipeline] catchError 00:00:38.995 [Pipeline] { 00:00:39.009 [Pipeline] timeout 00:00:39.009 Timeout set to expire in 30 min 00:00:39.011 [Pipeline] { 00:00:39.023 [Pipeline] stage 00:00:39.025 [Pipeline] { (Tests) 00:00:39.038 [Pipeline] sh 00:00:39.324 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.324 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.324 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.324 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:00:39.324 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:39.324 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:39.324 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:00:39.324 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:39.324 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:00:39.324 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:00:39.324 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:00:39.324 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:39.324 + source /etc/os-release 00:00:39.324 ++ NAME='Fedora Linux' 00:00:39.324 ++ VERSION='39 (Cloud Edition)' 00:00:39.324 ++ ID=fedora 00:00:39.324 ++ VERSION_ID=39 00:00:39.324 ++ VERSION_CODENAME= 00:00:39.324 ++ PLATFORM_ID=platform:f39 00:00:39.324 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:39.324 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.324 ++ LOGO=fedora-logo-icon 00:00:39.324 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:39.324 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.324 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:39.324 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.324 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.324 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.324 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:39.324 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.324 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:39.324 ++ SUPPORT_END=2024-11-12 00:00:39.324 ++ VARIANT='Cloud Edition' 00:00:39.324 ++ VARIANT_ID=cloud 00:00:39.324 + uname -a 00:00:39.324 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:39.324 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:00:41.858 Hugepages 00:00:41.859 node hugesize free / total 00:00:41.859 node0 1048576kB 0 / 0 00:00:41.859 node0 2048kB 0 / 0 00:00:41.859 node1 1048576kB 0 / 0 00:00:41.859 node1 2048kB 0 / 0 00:00:41.859 00:00:41.859 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:41.859 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 
00:00:41.859 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:41.859 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:41.859 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:41.859 + rm -f /tmp/spdk-ld-path 00:00:41.859 + source autorun-spdk.conf 00:00:41.859 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.859 ++ SPDK_TEST_FUZZER_SHORT=1 00:00:41.859 ++ SPDK_TEST_FUZZER=1 00:00:41.859 ++ SPDK_TEST_SETUP=1 00:00:41.859 ++ SPDK_RUN_UBSAN=1 00:00:41.859 ++ RUN_NIGHTLY=1 00:00:41.859 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:41.859 + [[ -n '' ]] 00:00:41.859 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:41.859 + for M in /var/spdk/build-*-manifest.txt 00:00:41.859 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:00:41.859 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:41.859 + for M in /var/spdk/build-*-manifest.txt 00:00:41.859 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:41.859 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:41.859 + for M in /var/spdk/build-*-manifest.txt 00:00:41.859 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:41.859 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:00:41.859 ++ uname 00:00:41.859 + [[ Linux == \L\i\n\u\x ]] 00:00:41.859 + sudo dmesg -T 00:00:42.117 + sudo dmesg --clear 00:00:42.117 + dmesg_pid=917943 00:00:42.117 + [[ Fedora Linux == FreeBSD ]] 00:00:42.117 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.117 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.117 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.117 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.117 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.117 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.117 + sudo dmesg -Tw 00:00:42.118 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.118 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:42.118 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.118 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.118 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.118 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.118 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.118 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.118 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:00:42.118 Test configuration: 00:00:42.118 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.118 SPDK_TEST_FUZZER_SHORT=1 00:00:42.118 SPDK_TEST_FUZZER=1 00:00:42.118 SPDK_TEST_SETUP=1 00:00:42.118 SPDK_RUN_UBSAN=1 00:00:42.118 RUN_NIGHTLY=1 21:44:26 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:00:42.118 21:44:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:00:42.118 21:44:26 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:42.118 21:44:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.118 21:44:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.118 21:44:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.118 21:44:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.118 21:44:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.118 21:44:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.118 21:44:26 -- paths/export.sh@5 -- $ export PATH 00:00:42.118 21:44:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.118 21:44:26 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:00:42.118 21:44:26 -- common/autobuild_common.sh@479 -- $ date +%s 00:00:42.118 21:44:26 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727725466.XXXXXX 00:00:42.118 21:44:26 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727725466.gZCXC9 00:00:42.118 21:44:26 -- common/autobuild_common.sh@481 -- $ [[ -n '' 
]] 00:00:42.118 21:44:26 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:00:42.118 21:44:26 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:00:42.118 21:44:26 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.118 21:44:26 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.118 21:44:26 -- common/autobuild_common.sh@495 -- $ get_config_params 00:00:42.118 21:44:26 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:00:42.118 21:44:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:42.118 21:44:26 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.118 21:44:26 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:00:42.118 21:44:26 -- pm/common@17 -- $ local monitor 00:00:42.118 21:44:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.118 21:44:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.118 21:44:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.118 21:44:26 -- pm/common@21 -- $ date +%s 00:00:42.118 21:44:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.118 21:44:26 -- pm/common@21 -- $ date +%s 00:00:42.118 21:44:26 -- pm/common@21 -- $ date +%s 00:00:42.118 21:44:26 -- pm/common@25 -- $ sleep 1 00:00:42.118 21:44:26 -- pm/common@21 -- $ date +%s 00:00:42.118 21:44:26 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727725466 00:00:42.118 21:44:26 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727725466 00:00:42.118 21:44:26 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727725466 00:00:42.118 21:44:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1727725466 00:00:42.376 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727725466_collect-cpu-temp.pm.log 00:00:42.376 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727725466_collect-cpu-load.pm.log 00:00:42.376 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727725466_collect-vmstat.pm.log 00:00:42.376 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1727725466_collect-bmc-pm.bmc.pm.log 00:00:43.341 21:44:27 -- 
common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:00:43.341 21:44:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.341 21:44:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.341 21:44:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:43.341 21:44:27 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.341 Mon Sep 30 07:44:27 PM UTC 2024 00:00:43.341 21:44:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.341 v25.01-pre-17-g09cc66129 00:00:43.341 21:44:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.341 21:44:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.341 21:44:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.341 21:44:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:00:43.341 21:44:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:43.341 21:44:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.341 ************************************ 00:00:43.341 START TEST ubsan 00:00:43.341 ************************************ 00:00:43.341 21:44:27 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:00:43.341 using ubsan 00:00:43.341 00:00:43.341 real 0m0.001s 00:00:43.341 user 0m0.000s 00:00:43.341 sys 0m0.000s 00:00:43.341 21:44:27 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:00:43.341 21:44:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.341 ************************************ 00:00:43.341 END TEST ubsan 00:00:43.341 ************************************ 00:00:43.341 21:44:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:43.341 21:44:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:43.341 21:44:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:43.341 21:44:27 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:00:43.342 21:44:27 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:00:43.342 21:44:27 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:00:43.342 21:44:27 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:00:43.342 21:44:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:00:43.342 21:44:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.342 ************************************ 00:00:43.342 START TEST autobuild_llvm_precompile 00:00:43.342 ************************************ 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:00:43.342 Target: x86_64-redhat-linux-gnu 00:00:43.342 Thread model: posix 00:00:43.342 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:00:43.342 21:44:27 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:00:43.600 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:00:43.600 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:00:44.165 Using 'verbs' RDMA provider 00:00:59.992 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:12.191 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:12.191 Creating mk/config.mk...done. 00:01:12.191 Creating mk/cc.flags.mk...done. 00:01:12.191 Type 'make' to build. 00:01:12.191 00:01:12.191 real 0m28.774s 00:01:12.191 user 0m12.480s 00:01:12.191 sys 0m15.594s 00:01:12.191 21:44:56 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:12.191 21:44:56 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:01:12.191 ************************************ 00:01:12.191 END TEST autobuild_llvm_precompile 00:01:12.191 ************************************ 00:01:12.191 21:44:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:12.191 21:44:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:12.191 21:44:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:12.191 21:44:56 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:12.191 21:44:56 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:12.450 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:12.450 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:12.709 Using 'verbs' RDMA provider 00:01:25.854 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:38.059 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:38.059 Creating mk/config.mk...done. 00:01:38.059 Creating mk/cc.flags.mk...done. 
00:01:38.059 Type 'make' to build. 00:01:38.059 21:45:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:38.059 21:45:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:38.059 21:45:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:38.059 21:45:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.059 ************************************ 00:01:38.059 START TEST make 00:01:38.059 ************************************ 00:01:38.059 21:45:21 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:38.059 make[1]: Nothing to be done for 'all'. 00:01:39.440 The Meson build system 00:01:39.440 Version: 1.5.0 00:01:39.440 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:01:39.440 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.440 Build type: native build 00:01:39.440 Project name: libvfio-user 00:01:39.440 Project version: 0.0.1 00:01:39.440 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:39.440 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:39.440 Host machine cpu family: x86_64 00:01:39.440 Host machine cpu: x86_64 00:01:39.440 Run-time dependency threads found: YES 00:01:39.440 Library dl found: YES 00:01:39.440 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:39.440 Run-time dependency json-c found: YES 0.17 00:01:39.440 Run-time dependency cmocka found: YES 1.1.7 00:01:39.440 Program pytest-3 found: NO 00:01:39.440 Program flake8 found: NO 00:01:39.440 Program misspell-fixer found: NO 00:01:39.440 Program restructuredtext-lint found: NO 00:01:39.440 Program valgrind found: YES (/usr/bin/valgrind) 00:01:39.440 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.440 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.440 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.440 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:39.440 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:39.440 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:39.440 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:39.440 Build targets in project: 8 00:01:39.440 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:39.440 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:39.440 00:01:39.440 libvfio-user 0.0.1 00:01:39.440 00:01:39.440 User defined options 00:01:39.440 buildtype : debug 00:01:39.440 default_library: static 00:01:39.440 libdir : /usr/local/lib 00:01:39.440 00:01:39.440 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.701 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:39.701 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:01:39.701 [2/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:39.701 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:01:39.701 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:39.701 [5/36] Compiling C object samples/null.p/null.c.o 00:01:39.701 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:01:39.701 [7/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:01:39.701 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:39.701 [9/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:39.701 [10/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:39.701 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:39.701 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:39.701 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:39.701 [14/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:39.701 [15/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:39.701 [16/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:01:39.701 [17/36] Compiling C object test/unit_tests.p/mocks.c.o 00:01:39.701 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:01:39.701 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:01:39.701 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:01:39.701 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:39.701 [22/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:39.701 [23/36] Compiling C object samples/server.p/server.c.o 00:01:39.701 [24/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:39.701 [25/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:39.701 [26/36] Compiling C object samples/client.p/client.c.o 00:01:39.701 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:39.701 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:01:39.962 [29/36] Linking static target lib/libvfio-user.a 00:01:39.962 [30/36] Linking target samples/client 00:01:39.962 [31/36] Linking target samples/gpio-pci-idio-16 00:01:39.962 [32/36] Linking target samples/server 00:01:39.962 [33/36] Linking target test/unit_tests 00:01:39.962 [34/36] Linking target samples/shadow_ioeventfd_server 00:01:39.962 [35/36] Linking target samples/null 00:01:39.962 [36/36] Linking target samples/lspci 00:01:39.962 INFO: autodetecting backend as ninja 00:01:39.962 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.962 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:40.221 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.221 ninja: no work to do. 00:01:46.805 The Meson build system 00:01:46.805 Version: 1.5.0 00:01:46.805 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:01:46.805 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:01:46.805 Build type: native build 00:01:46.805 Program cat found: YES (/usr/bin/cat) 00:01:46.805 Project name: DPDK 00:01:46.805 Project version: 24.03.0 00:01:46.805 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:01:46.805 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:01:46.805 Host machine cpu family: x86_64 00:01:46.805 Host machine cpu: x86_64 00:01:46.805 Message: ## Building in Developer Mode ## 00:01:46.805 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.805 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:46.805 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.805 Program python3 found: YES (/usr/bin/python3) 00:01:46.805 Program cat found: YES (/usr/bin/cat) 00:01:46.805 Compiler for C supports arguments -march=native: YES 00:01:46.805 Checking for size of "void *" : 8 00:01:46.805 Checking for size of "void *" : 8 (cached) 00:01:46.805 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:46.805 Library m found: YES 00:01:46.805 Library numa found: YES 00:01:46.805 Has header "numaif.h" : YES 00:01:46.805 Library fdt found: NO 00:01:46.805 Library execinfo found: NO 00:01:46.805 Has header "execinfo.h" : YES 00:01:46.805 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:46.805 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.805 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.805 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.805 Run-time dependency openssl found: YES 3.1.1 00:01:46.805 Run-time dependency libpcap found: YES 1.10.4 00:01:46.805 Has header "pcap.h" with dependency libpcap: YES 00:01:46.805 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.805 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.805 Compiler for C supports arguments -Wformat: YES 00:01:46.805 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:46.805 Compiler for C supports arguments -Wformat-security: YES 00:01:46.805 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.805 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.805 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.805 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.805 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.805 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.805 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.805 Compiler for C supports arguments -Wundef: YES 00:01:46.805 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.805 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.805 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:01:46.805 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:01:46.805 Program objdump found: YES (/usr/bin/objdump) 00:01:46.805 Compiler for C supports arguments -mavx512f: YES 00:01:46.805 Checking if "AVX512 checking" compiles: YES 00:01:46.805 Fetching value of define "__SSE4_2__" : 1 00:01:46.805 Fetching value of define "__AES__" : 1 00:01:46.805 Fetching value of define "__AVX__" : 1 00:01:46.805 Fetching value of define "__AVX2__" : 1 00:01:46.805 Fetching value of define "__AVX512BW__" : 1 00:01:46.805 Fetching value of define "__AVX512CD__" : 1 00:01:46.805 Fetching value of define "__AVX512DQ__" : 1 00:01:46.805 Fetching value of define "__AVX512F__" : 1 00:01:46.805 Fetching value of define "__AVX512VL__" : 1 00:01:46.805 Fetching value of define "__PCLMUL__" : 1 00:01:46.805 Fetching value of define "__RDRND__" : 1 00:01:46.805 Fetching value of define "__RDSEED__" : 1 00:01:46.805 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.805 Fetching value of define "__znver1__" : (undefined) 00:01:46.805 Fetching value of define "__znver2__" : (undefined) 00:01:46.805 Fetching value of define "__znver3__" : (undefined) 00:01:46.805 Fetching value of define "__znver4__" : (undefined) 00:01:46.805 Compiler for C supports arguments -Wno-format-truncation: NO 00:01:46.805 Message: lib/log: Defining dependency "log" 00:01:46.805 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.805 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.805 Checking for function "getentropy" : NO 00:01:46.805 Message: lib/eal: Defining dependency "eal" 00:01:46.805 Message: lib/ring: Defining dependency "ring" 00:01:46.805 Message: lib/rcu: Defining dependency "rcu" 00:01:46.805 Message: lib/mempool: Defining dependency "mempool" 00:01:46.805 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.805 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.805 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.805 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.805 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:46.805 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:46.805 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:46.805 Compiler for C supports arguments -mpclmul: YES 00:01:46.805 Compiler for C supports arguments -maes: YES 00:01:46.805 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.805 Compiler for C supports arguments -mavx512bw: YES 00:01:46.805 Compiler for C supports arguments -mavx512dq: YES 00:01:46.805 Compiler for C supports arguments -mavx512vl: YES 00:01:46.805 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.805 Compiler for C supports arguments -mavx2: YES 00:01:46.805 Compiler for C supports arguments -mavx: YES 00:01:46.805 Message: lib/net: Defining dependency "net" 00:01:46.805 Message: lib/meter: Defining dependency "meter" 00:01:46.805 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.805 Message: lib/pci: Defining dependency "pci" 00:01:46.805 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.805 Message: lib/hash: Defining dependency "hash" 00:01:46.805 Message: lib/timer: Defining dependency "timer" 00:01:46.805 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.805 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.805 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.805 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.805 Message: lib/power: Defining dependency "power" 00:01:46.806 Message: lib/reorder: Defining 
dependency "reorder" 00:01:46.806 Message: lib/security: Defining dependency "security" 00:01:46.806 Has header "linux/userfaultfd.h" : YES 00:01:46.806 Has header "linux/vduse.h" : YES 00:01:46.806 Message: lib/vhost: Defining dependency "vhost" 00:01:46.806 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:01:46.806 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.806 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.806 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.806 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.806 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.806 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.806 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.806 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.806 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.806 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:46.806 Configuring doxy-api-html.conf using configuration 00:01:46.806 Configuring doxy-api-man.conf using configuration 00:01:46.806 Program mandb found: YES (/usr/bin/mandb) 00:01:46.806 Program sphinx-build found: NO 00:01:46.806 Configuring rte_build_config.h using configuration 00:01:46.806 Message: 00:01:46.806 ================= 00:01:46.806 Applications Enabled 00:01:46.806 ================= 00:01:46.806 00:01:46.806 apps: 00:01:46.806 00:01:46.806 00:01:46.806 Message: 00:01:46.806 ================= 00:01:46.806 Libraries Enabled 00:01:46.806 ================= 00:01:46.806 00:01:46.806 libs: 00:01:46.806 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.806 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.806 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.806 00:01:46.806 Message: 00:01:46.806 =============== 00:01:46.806 Drivers Enabled 00:01:46.806 =============== 00:01:46.806 00:01:46.806 common: 00:01:46.806 00:01:46.806 bus: 00:01:46.806 pci, vdev, 00:01:46.806 mempool: 00:01:46.806 ring, 00:01:46.806 dma: 00:01:46.806 00:01:46.806 net: 00:01:46.806 00:01:46.806 crypto: 00:01:46.806 00:01:46.806 compress: 00:01:46.806 00:01:46.806 vdpa: 00:01:46.806 00:01:46.806 00:01:46.806 Message: 00:01:46.806 ================= 00:01:46.806 Content Skipped 00:01:46.806 ================= 00:01:46.806 00:01:46.806 apps: 00:01:46.806 dumpcap: explicitly disabled via build config 00:01:46.806 graph: explicitly disabled via build config 00:01:46.806 pdump: explicitly disabled via build config 00:01:46.806 proc-info: explicitly disabled via build config 00:01:46.806 test-acl: explicitly disabled via build config 00:01:46.806 test-bbdev: explicitly disabled via build config 00:01:46.806 test-cmdline: explicitly disabled via build config 00:01:46.806 test-compress-perf: explicitly disabled via build config 00:01:46.806 test-crypto-perf: explicitly disabled via build config 00:01:46.806 test-dma-perf: explicitly disabled via build config 00:01:46.806 test-eventdev: explicitly disabled via build config 00:01:46.806 test-fib: explicitly disabled via build config 00:01:46.806 test-flow-perf: explicitly disabled via build config 00:01:46.806 test-gpudev: explicitly disabled via build config 00:01:46.806 test-mldev: explicitly disabled via build config 00:01:46.806 test-pipeline: explicitly disabled via build config 00:01:46.806 test-pmd: 
explicitly disabled via build config 00:01:46.806 test-regex: explicitly disabled via build config 00:01:46.806 test-sad: explicitly disabled via build config 00:01:46.806 test-security-perf: explicitly disabled via build config 00:01:46.806 00:01:46.806 libs: 00:01:46.806 argparse: explicitly disabled via build config 00:01:46.806 metrics: explicitly disabled via build config 00:01:46.806 acl: explicitly disabled via build config 00:01:46.806 bbdev: explicitly disabled via build config 00:01:46.806 bitratestats: explicitly disabled via build config 00:01:46.806 bpf: explicitly disabled via build config 00:01:46.806 cfgfile: explicitly disabled via build config 00:01:46.806 distributor: explicitly disabled via build config 00:01:46.806 efd: explicitly disabled via build config 00:01:46.806 eventdev: explicitly disabled via build config 00:01:46.806 dispatcher: explicitly disabled via build config 00:01:46.806 gpudev: explicitly disabled via build config 00:01:46.806 gro: explicitly disabled via build config 00:01:46.806 gso: explicitly disabled via build config 00:01:46.806 ip_frag: explicitly disabled via build config 00:01:46.806 jobstats: explicitly disabled via build config 00:01:46.806 latencystats: explicitly disabled via build config 00:01:46.806 lpm: explicitly disabled via build config 00:01:46.806 member: explicitly disabled via build config 00:01:46.806 pcapng: explicitly disabled via build config 00:01:46.806 rawdev: explicitly disabled via build config 00:01:46.806 regexdev: explicitly disabled via build config 00:01:46.806 mldev: explicitly disabled via build config 00:01:46.806 rib: explicitly disabled via build config 00:01:46.806 sched: explicitly disabled via build config 00:01:46.806 stack: explicitly disabled via build config 00:01:46.806 ipsec: explicitly disabled via build config 00:01:46.806 pdcp: explicitly disabled via build config 00:01:46.806 fib: explicitly disabled via build config 00:01:46.806 port: explicitly disabled via build config 00:01:46.806 pdump: explicitly disabled via build config 00:01:46.806 table: explicitly disabled via build config 00:01:46.806 pipeline: explicitly disabled via build config 00:01:46.806 graph: explicitly disabled via build config 00:01:46.806 node: explicitly disabled via build config 00:01:46.806 00:01:46.806 drivers: 00:01:46.806 common/cpt: not in enabled drivers build config 00:01:46.806 common/dpaax: not in enabled drivers build config 00:01:46.806 common/iavf: not in enabled drivers build config 00:01:46.806 common/idpf: not in enabled drivers build config 00:01:46.806 common/ionic: not in enabled drivers build config 00:01:46.806 common/mvep: not in enabled drivers build config 00:01:46.806 common/octeontx: not in enabled drivers build config 00:01:46.806 bus/auxiliary: not in enabled drivers build config 00:01:46.806 bus/cdx: not in enabled drivers build config 00:01:46.806 bus/dpaa: not in enabled drivers build config 00:01:46.806 bus/fslmc: not in enabled drivers build config 00:01:46.806 bus/ifpga: not in enabled drivers build config 00:01:46.806 bus/platform: not in enabled drivers build config 00:01:46.806 bus/uacce: not in enabled drivers build config 00:01:46.806 bus/vmbus: not in enabled drivers build config 00:01:46.806 common/cnxk: not in enabled drivers build config 00:01:46.806 common/mlx5: not in enabled drivers build config 00:01:46.806 common/nfp: not in enabled drivers build config 00:01:46.806 common/nitrox: not in enabled drivers build config 00:01:46.806 common/qat: not in enabled drivers build config 
00:01:46.806 common/sfc_efx: not in enabled drivers build config 00:01:46.806 mempool/bucket: not in enabled drivers build config 00:01:46.806 mempool/cnxk: not in enabled drivers build config 00:01:46.806 mempool/dpaa: not in enabled drivers build config 00:01:46.806 mempool/dpaa2: not in enabled drivers build config 00:01:46.806 mempool/octeontx: not in enabled drivers build config 00:01:46.806 mempool/stack: not in enabled drivers build config 00:01:46.806 dma/cnxk: not in enabled drivers build config 00:01:46.806 dma/dpaa: not in enabled drivers build config 00:01:46.806 dma/dpaa2: not in enabled drivers build config 00:01:46.806 dma/hisilicon: not in enabled drivers build config 00:01:46.806 dma/idxd: not in enabled drivers build config 00:01:46.806 dma/ioat: not in enabled drivers build config 00:01:46.806 dma/skeleton: not in enabled drivers build config 00:01:46.806 net/af_packet: not in enabled drivers build config 00:01:46.806 net/af_xdp: not in enabled drivers build config 00:01:46.806 net/ark: not in enabled drivers build config 00:01:46.806 net/atlantic: not in enabled drivers build config 00:01:46.806 net/avp: not in enabled drivers build config 00:01:46.806 net/axgbe: not in enabled drivers build config 00:01:46.806 net/bnx2x: not in enabled drivers build config 00:01:46.806 net/bnxt: not in enabled drivers build config 00:01:46.806 net/bonding: not in enabled drivers build config 00:01:46.806 net/cnxk: not in enabled drivers build config 00:01:46.806 net/cpfl: not in enabled drivers build config 00:01:46.806 net/cxgbe: not in enabled drivers build config 00:01:46.806 net/dpaa: not in enabled drivers build config 00:01:46.806 net/dpaa2: not in enabled drivers build config 00:01:46.806 net/e1000: not in enabled drivers build config 00:01:46.806 net/ena: not in enabled drivers build config 00:01:46.806 net/enetc: not in enabled drivers build config 00:01:46.806 net/enetfec: not in enabled drivers build config 00:01:46.806 net/enic: not in enabled drivers build config 00:01:46.806 net/failsafe: not in enabled drivers build config 00:01:46.806 net/fm10k: not in enabled drivers build config 00:01:46.806 net/gve: not in enabled drivers build config 00:01:46.806 net/hinic: not in enabled drivers build config 00:01:46.806 net/hns3: not in enabled drivers build config 00:01:46.806 net/i40e: not in enabled drivers build config 00:01:46.806 net/iavf: not in enabled drivers build config 00:01:46.806 net/ice: not in enabled drivers build config 00:01:46.806 net/idpf: not in enabled drivers build config 00:01:46.806 net/igc: not in enabled drivers build config 00:01:46.806 net/ionic: not in enabled drivers build config 00:01:46.806 net/ipn3ke: not in enabled drivers build config 00:01:46.806 net/ixgbe: not in enabled drivers build config 00:01:46.806 net/mana: not in enabled drivers build config 00:01:46.806 net/memif: not in enabled drivers build config 00:01:46.806 net/mlx4: not in enabled drivers build config 00:01:46.806 net/mlx5: not in enabled drivers build config 00:01:46.806 net/mvneta: not in enabled drivers build config 00:01:46.806 net/mvpp2: not in enabled drivers build config 00:01:46.806 net/netvsc: not in enabled drivers build config 00:01:46.806 net/nfb: not in enabled drivers build config 00:01:46.806 net/nfp: not in enabled drivers build config 00:01:46.806 net/ngbe: not in enabled drivers build config 00:01:46.806 net/null: not in enabled drivers build config 00:01:46.806 net/octeontx: not in enabled drivers build config 00:01:46.806 net/octeon_ep: not in enabled 
drivers build config 00:01:46.806 net/pcap: not in enabled drivers build config 00:01:46.807 net/pfe: not in enabled drivers build config 00:01:46.807 net/qede: not in enabled drivers build config 00:01:46.807 net/ring: not in enabled drivers build config 00:01:46.807 net/sfc: not in enabled drivers build config 00:01:46.807 net/softnic: not in enabled drivers build config 00:01:46.807 net/tap: not in enabled drivers build config 00:01:46.807 net/thunderx: not in enabled drivers build config 00:01:46.807 net/txgbe: not in enabled drivers build config 00:01:46.807 net/vdev_netvsc: not in enabled drivers build config 00:01:46.807 net/vhost: not in enabled drivers build config 00:01:46.807 net/virtio: not in enabled drivers build config 00:01:46.807 net/vmxnet3: not in enabled drivers build config 00:01:46.807 raw/*: missing internal dependency, "rawdev" 00:01:46.807 crypto/armv8: not in enabled drivers build config 00:01:46.807 crypto/bcmfs: not in enabled drivers build config 00:01:46.807 crypto/caam_jr: not in enabled drivers build config 00:01:46.807 crypto/ccp: not in enabled drivers build config 00:01:46.807 crypto/cnxk: not in enabled drivers build config 00:01:46.807 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.807 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.807 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.807 crypto/mlx5: not in enabled drivers build config 00:01:46.807 crypto/mvsam: not in enabled drivers build config 00:01:46.807 crypto/nitrox: not in enabled drivers build config 00:01:46.807 crypto/null: not in enabled drivers build config 00:01:46.807 crypto/octeontx: not in enabled drivers build config 00:01:46.807 crypto/openssl: not in enabled drivers build config 00:01:46.807 crypto/scheduler: not in enabled drivers build config 00:01:46.807 crypto/uadk: not in enabled drivers build config 00:01:46.807 crypto/virtio: not in enabled drivers build config 00:01:46.807 compress/isal: not in enabled drivers build config 00:01:46.807 compress/mlx5: not in enabled drivers build config 00:01:46.807 compress/nitrox: not in enabled drivers build config 00:01:46.807 compress/octeontx: not in enabled drivers build config 00:01:46.807 compress/zlib: not in enabled drivers build config 00:01:46.807 regex/*: missing internal dependency, "regexdev" 00:01:46.807 ml/*: missing internal dependency, "mldev" 00:01:46.807 vdpa/ifc: not in enabled drivers build config 00:01:46.807 vdpa/mlx5: not in enabled drivers build config 00:01:46.807 vdpa/nfp: not in enabled drivers build config 00:01:46.807 vdpa/sfc: not in enabled drivers build config 00:01:46.807 event/*: missing internal dependency, "eventdev" 00:01:46.807 baseband/*: missing internal dependency, "bbdev" 00:01:46.807 gpu/*: missing internal dependency, "gpudev" 00:01:46.807 00:01:46.807 00:01:46.807 Build targets in project: 85 00:01:46.807 00:01:46.807 DPDK 24.03.0 00:01:46.807 00:01:46.807 User defined options 00:01:46.807 buildtype : debug 00:01:46.807 default_library : static 00:01:46.807 libdir : lib 00:01:46.807 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:46.807 c_args : -fPIC -Werror 00:01:46.807 c_link_args : 00:01:46.807 cpu_instruction_set: native 00:01:46.807 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:46.807 disable_libs : 
port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:46.807 enable_docs : false 00:01:46.807 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.807 enable_kmods : false 00:01:46.807 max_lcores : 128 00:01:46.807 tests : false 00:01:46.807 00:01:46.807 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.807 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.807 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.807 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.807 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.807 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.807 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.807 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.807 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.807 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.807 [9/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.807 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.807 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.807 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.807 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.807 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.807 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.807 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.807 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.807 [18/268] Linking static target lib/librte_kvargs.a 00:01:46.807 [19/268] Linking static target lib/librte_log.a 00:01:46.807 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.807 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.807 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.807 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.807 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.807 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.807 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.807 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.807 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.807 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.807 [30/268] Linking static target lib/librte_pci.a 00:01:46.807 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:46.807 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.807 [33/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.807 [34/268] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.807 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.807 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.065 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.065 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.065 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.065 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.065 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.065 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.065 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.065 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.065 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.065 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.065 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.065 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.065 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.065 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.065 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.065 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.065 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.065 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.065 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.065 [56/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:47.065 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.065 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.065 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.065 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.065 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.065 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.065 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.065 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:47.065 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.065 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.065 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.065 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.065 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.065 [70/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.065 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.065 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.065 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.065 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:47.065 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.065 [76/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:47.065 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.065 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.065 [79/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.065 [80/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.065 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.065 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.065 [83/268] Linking static target lib/librte_telemetry.a 00:01:47.065 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.065 [85/268] Linking static target lib/librte_meter.a 00:01:47.065 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.065 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:47.065 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.065 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.065 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:47.065 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.065 [92/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.065 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.065 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.065 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.065 [96/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.065 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.065 [98/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.065 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.065 [100/268] Linking static target lib/librte_ring.a 00:01:47.065 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.065 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.065 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.065 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.065 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:47.065 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.065 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.065 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.065 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.065 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.065 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.065 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.065 [113/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.065 [114/268] 
Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.065 [115/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.065 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.065 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.065 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.065 [119/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.065 [120/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.065 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.065 [122/268] Linking static target lib/librte_timer.a 00:01:47.065 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.065 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.065 [125/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.065 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.065 [127/268] Linking static target lib/librte_cmdline.a 00:01:47.065 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.065 [129/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.065 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.065 [131/268] Linking static target lib/librte_eal.a 00:01:47.065 [132/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.065 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.065 [134/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.065 [135/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:47.065 [136/268] Linking static target lib/librte_net.a 00:01:47.065 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:47.065 [138/268] Linking static target lib/librte_mempool.a 00:01:47.065 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.065 [140/268] Linking static target lib/librte_rcu.a 00:01:47.323 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.323 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.323 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.323 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.323 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.323 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.323 [147/268] Linking static target lib/librte_dmadev.a 00:01:47.323 [148/268] Linking static target lib/librte_mbuf.a 00:01:47.323 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.323 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.323 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.323 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.323 [153/268] Linking static target lib/librte_compressdev.a 00:01:47.323 [154/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.323 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.323 
[156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.323 [157/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.323 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.323 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.323 [160/268] Linking static target lib/librte_hash.a 00:01:47.323 [161/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.323 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.323 [163/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.323 [164/268] Linking target lib/librte_log.so.24.1 00:01:47.323 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.323 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.323 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.323 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.323 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.323 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.583 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.583 [172/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.583 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.583 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.583 [175/268] Linking static target lib/librte_cryptodev.a 00:01:47.583 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.583 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.583 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.583 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.583 [180/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.583 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.583 [182/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.583 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.583 [184/268] Linking static target lib/librte_power.a 00:01:47.583 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.583 [186/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.583 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:47.583 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.583 [189/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.583 [190/268] Linking static target lib/librte_security.a 00:01:47.583 [191/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.583 [192/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.583 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.583 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.583 [195/268] Linking target lib/librte_kvargs.so.24.1 00:01:47.583 [196/268] 
Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:47.583 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.583 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.583 [199/268] Linking static target lib/librte_reorder.a 00:01:47.583 [200/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.583 [201/268] Linking target lib/librte_telemetry.so.24.1 00:01:47.583 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.583 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.583 [204/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.843 [205/268] Linking static target drivers/librte_bus_vdev.a 00:01:47.843 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.843 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.843 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:47.843 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.843 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.843 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.843 [212/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:47.843 [213/268] Linking static target lib/librte_ethdev.a 00:01:47.843 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:47.843 [215/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.843 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.843 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.103 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.103 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.103 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.103 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.103 [222/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.362 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.362 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.362 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.362 [226/268] Linking static target lib/librte_vhost.a 00:01:48.362 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.621 [228/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.621 [229/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.558 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.655 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.774 
[232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.711 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.711 [234/268] Linking target lib/librte_eal.so.24.1 00:01:59.711 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:59.970 [236/268] Linking target lib/librte_dmadev.so.24.1 00:01:59.970 [237/268] Linking target lib/librte_pci.so.24.1 00:01:59.970 [238/268] Linking target lib/librte_ring.so.24.1 00:01:59.970 [239/268] Linking target lib/librte_timer.so.24.1 00:01:59.970 [240/268] Linking target lib/librte_meter.so.24.1 00:01:59.970 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:59.970 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:59.970 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:59.970 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:59.970 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:59.970 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:59.970 [247/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:59.970 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:59.970 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:00.230 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:00.230 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.230 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:00.230 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:00.490 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:00.490 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:00.490 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:00.490 [257/268] Linking target lib/librte_net.so.24.1 00:02:00.490 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:00.490 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:00.490 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:00.750 [261/268] Linking target lib/librte_security.so.24.1 00:02:00.750 [262/268] Linking target lib/librte_hash.so.24.1 00:02:00.750 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:00.750 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:00.750 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:00.750 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.010 [267/268] Linking target lib/librte_power.so.24.1 00:02:01.010 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:01.010 INFO: autodetecting backend as ninja 00:02:01.010 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:01.948 CC lib/ut/ut.o 00:02:01.948 CC lib/ut_mock/mock.o 00:02:01.948 CC lib/log/log.o 00:02:01.948 CC lib/log/log_flags.o 00:02:01.948 CC lib/log/log_deprecated.o 00:02:01.948 LIB libspdk_ut.a 00:02:01.948 LIB libspdk_log.a 00:02:01.948 LIB libspdk_ut_mock.a 00:02:02.207 CC lib/ioat/ioat.o 00:02:02.467 CC lib/dma/dma.o 00:02:02.467 CXX lib/trace_parser/trace.o 00:02:02.467 CC lib/util/base64.o 
00:02:02.467 CC lib/util/bit_array.o 00:02:02.467 CC lib/util/cpuset.o 00:02:02.467 CC lib/util/crc16.o 00:02:02.467 CC lib/util/crc32.o 00:02:02.467 CC lib/util/crc32c.o 00:02:02.467 CC lib/util/crc32_ieee.o 00:02:02.467 CC lib/util/crc64.o 00:02:02.467 CC lib/util/dif.o 00:02:02.467 CC lib/util/fd.o 00:02:02.467 CC lib/util/fd_group.o 00:02:02.467 CC lib/util/file.o 00:02:02.467 CC lib/util/hexlify.o 00:02:02.467 CC lib/util/iov.o 00:02:02.467 CC lib/util/pipe.o 00:02:02.467 CC lib/util/math.o 00:02:02.467 CC lib/util/net.o 00:02:02.467 CC lib/util/strerror_tls.o 00:02:02.467 CC lib/util/string.o 00:02:02.467 CC lib/util/uuid.o 00:02:02.467 CC lib/util/md5.o 00:02:02.467 CC lib/util/xor.o 00:02:02.467 CC lib/util/zipf.o 00:02:02.467 LIB libspdk_dma.a 00:02:02.467 CC lib/vfio_user/host/vfio_user.o 00:02:02.467 CC lib/vfio_user/host/vfio_user_pci.o 00:02:02.467 LIB libspdk_ioat.a 00:02:02.726 LIB libspdk_vfio_user.a 00:02:02.726 LIB libspdk_util.a 00:02:02.985 LIB libspdk_trace_parser.a 00:02:02.985 CC lib/rdma_utils/rdma_utils.o 00:02:02.985 CC lib/conf/conf.o 00:02:02.985 CC lib/json/json_parse.o 00:02:02.985 CC lib/json/json_util.o 00:02:02.985 CC lib/json/json_write.o 00:02:02.985 CC lib/vmd/vmd.o 00:02:02.985 CC lib/idxd/idxd.o 00:02:02.985 CC lib/vmd/led.o 00:02:02.985 CC lib/rdma_provider/common.o 00:02:02.985 CC lib/idxd/idxd_user.o 00:02:02.985 CC lib/idxd/idxd_kernel.o 00:02:02.985 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.985 CC lib/env_dpdk/env.o 00:02:02.985 CC lib/env_dpdk/memory.o 00:02:02.985 CC lib/env_dpdk/pci.o 00:02:02.985 CC lib/env_dpdk/threads.o 00:02:02.985 CC lib/env_dpdk/init.o 00:02:02.985 CC lib/env_dpdk/pci_virtio.o 00:02:02.985 CC lib/env_dpdk/pci_ioat.o 00:02:02.985 CC lib/env_dpdk/pci_vmd.o 00:02:02.985 CC lib/env_dpdk/pci_idxd.o 00:02:02.985 CC lib/env_dpdk/pci_event.o 00:02:02.985 CC lib/env_dpdk/sigbus_handler.o 00:02:02.985 CC lib/env_dpdk/pci_dpdk.o 00:02:02.985 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.985 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.244 LIB libspdk_conf.a 00:02:03.244 LIB libspdk_rdma_provider.a 00:02:03.244 LIB libspdk_rdma_utils.a 00:02:03.244 LIB libspdk_json.a 00:02:03.244 LIB libspdk_idxd.a 00:02:03.503 LIB libspdk_vmd.a 00:02:03.503 CC lib/jsonrpc/jsonrpc_server.o 00:02:03.503 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.503 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.503 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.761 LIB libspdk_jsonrpc.a 00:02:04.020 LIB libspdk_env_dpdk.a 00:02:04.020 CC lib/rpc/rpc.o 00:02:04.020 LIB libspdk_rpc.a 00:02:04.587 CC lib/keyring/keyring.o 00:02:04.587 CC lib/keyring/keyring_rpc.o 00:02:04.587 CC lib/notify/notify.o 00:02:04.587 CC lib/trace/trace.o 00:02:04.587 CC lib/notify/notify_rpc.o 00:02:04.587 CC lib/trace/trace_flags.o 00:02:04.587 CC lib/trace/trace_rpc.o 00:02:04.587 LIB libspdk_notify.a 00:02:04.587 LIB libspdk_trace.a 00:02:04.587 LIB libspdk_keyring.a 00:02:04.847 CC lib/thread/thread.o 00:02:04.847 CC lib/thread/iobuf.o 00:02:04.847 CC lib/sock/sock.o 00:02:04.847 CC lib/sock/sock_rpc.o 00:02:05.106 LIB libspdk_sock.a 00:02:05.364 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.364 CC lib/nvme/nvme_ctrlr.o 00:02:05.364 CC lib/nvme/nvme_ns_cmd.o 00:02:05.364 CC lib/nvme/nvme_fabric.o 00:02:05.364 CC lib/nvme/nvme_pcie_common.o 00:02:05.364 CC lib/nvme/nvme_ns.o 00:02:05.364 CC lib/nvme/nvme_pcie.o 00:02:05.364 CC lib/nvme/nvme_quirks.o 00:02:05.364 CC lib/nvme/nvme_qpair.o 00:02:05.364 CC lib/nvme/nvme.o 00:02:05.364 CC lib/nvme/nvme_transport.o 00:02:05.364 CC lib/nvme/nvme_discovery.o 
00:02:05.364 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.364 CC lib/nvme/nvme_opal.o 00:02:05.364 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.364 CC lib/nvme/nvme_tcp.o 00:02:05.364 CC lib/nvme/nvme_io_msg.o 00:02:05.364 CC lib/nvme/nvme_poll_group.o 00:02:05.364 CC lib/nvme/nvme_zns.o 00:02:05.364 CC lib/nvme/nvme_stubs.o 00:02:05.364 CC lib/nvme/nvme_cuse.o 00:02:05.364 CC lib/nvme/nvme_auth.o 00:02:05.364 CC lib/nvme/nvme_vfio_user.o 00:02:05.364 CC lib/nvme/nvme_rdma.o 00:02:05.623 LIB libspdk_thread.a 00:02:05.882 CC lib/init/json_config.o 00:02:05.882 CC lib/init/subsystem.o 00:02:05.882 CC lib/init/subsystem_rpc.o 00:02:05.882 CC lib/init/rpc.o 00:02:05.882 CC lib/accel/accel_rpc.o 00:02:05.882 CC lib/accel/accel.o 00:02:05.882 CC lib/accel/accel_sw.o 00:02:05.882 CC lib/virtio/virtio_vfio_user.o 00:02:05.882 CC lib/virtio/virtio.o 00:02:05.882 CC lib/virtio/virtio_vhost_user.o 00:02:05.882 CC lib/vfu_tgt/tgt_endpoint.o 00:02:05.882 CC lib/virtio/virtio_pci.o 00:02:05.882 CC lib/vfu_tgt/tgt_rpc.o 00:02:05.882 CC lib/blob/request.o 00:02:05.882 CC lib/blob/blobstore.o 00:02:05.882 CC lib/blob/zeroes.o 00:02:05.882 CC lib/blob/blob_bs_dev.o 00:02:05.882 CC lib/fsdev/fsdev.o 00:02:05.882 CC lib/fsdev/fsdev_io.o 00:02:05.882 CC lib/fsdev/fsdev_rpc.o 00:02:06.140 LIB libspdk_init.a 00:02:06.140 LIB libspdk_virtio.a 00:02:06.140 LIB libspdk_vfu_tgt.a 00:02:06.398 LIB libspdk_fsdev.a 00:02:06.398 CC lib/event/reactor.o 00:02:06.398 CC lib/event/app.o 00:02:06.398 CC lib/event/log_rpc.o 00:02:06.398 CC lib/event/app_rpc.o 00:02:06.398 CC lib/event/scheduler_static.o 00:02:06.657 LIB libspdk_event.a 00:02:06.657 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:06.657 LIB libspdk_accel.a 00:02:06.657 LIB libspdk_nvme.a 00:02:06.916 CC lib/bdev/bdev.o 00:02:06.916 CC lib/bdev/bdev_rpc.o 00:02:06.916 CC lib/bdev/bdev_zone.o 00:02:06.916 CC lib/bdev/part.o 00:02:06.916 CC lib/bdev/scsi_nvme.o 00:02:06.916 LIB libspdk_fuse_dispatcher.a 00:02:07.484 LIB libspdk_blob.a 00:02:08.052 CC lib/lvol/lvol.o 00:02:08.052 CC lib/blobfs/blobfs.o 00:02:08.052 CC lib/blobfs/tree.o 00:02:08.310 LIB libspdk_lvol.a 00:02:08.311 LIB libspdk_blobfs.a 00:02:08.569 LIB libspdk_bdev.a 00:02:08.828 CC lib/nbd/nbd.o 00:02:08.828 CC lib/nbd/nbd_rpc.o 00:02:08.828 CC lib/ublk/ublk.o 00:02:08.828 CC lib/ublk/ublk_rpc.o 00:02:08.828 CC lib/nvmf/ctrlr.o 00:02:08.828 CC lib/nvmf/ctrlr_bdev.o 00:02:08.828 CC lib/scsi/lun.o 00:02:08.828 CC lib/nvmf/ctrlr_discovery.o 00:02:08.828 CC lib/nvmf/nvmf.o 00:02:08.828 CC lib/scsi/dev.o 00:02:08.828 CC lib/scsi/scsi.o 00:02:08.828 CC lib/nvmf/subsystem.o 00:02:08.828 CC lib/scsi/port.o 00:02:08.828 CC lib/nvmf/nvmf_rpc.o 00:02:08.828 CC lib/nvmf/tcp.o 00:02:08.828 CC lib/scsi/scsi_bdev.o 00:02:08.828 CC lib/nvmf/transport.o 00:02:08.828 CC lib/scsi/scsi_pr.o 00:02:08.828 CC lib/scsi/task.o 00:02:08.828 CC lib/nvmf/mdns_server.o 00:02:08.828 CC lib/nvmf/stubs.o 00:02:08.828 CC lib/ftl/ftl_core.o 00:02:08.828 CC lib/scsi/scsi_rpc.o 00:02:08.828 CC lib/ftl/ftl_init.o 00:02:08.828 CC lib/nvmf/vfio_user.o 00:02:08.828 CC lib/ftl/ftl_layout.o 00:02:08.828 CC lib/nvmf/rdma.o 00:02:08.828 CC lib/nvmf/auth.o 00:02:08.828 CC lib/ftl/ftl_debug.o 00:02:08.828 CC lib/ftl/ftl_sb.o 00:02:08.828 CC lib/ftl/ftl_io.o 00:02:08.828 CC lib/ftl/ftl_l2p.o 00:02:08.828 CC lib/ftl/ftl_l2p_flat.o 00:02:08.828 CC lib/ftl/ftl_band.o 00:02:08.828 CC lib/ftl/ftl_band_ops.o 00:02:08.828 CC lib/ftl/ftl_nv_cache.o 00:02:08.828 CC lib/ftl/ftl_writer.o 00:02:08.828 CC lib/ftl/ftl_rq.o 00:02:08.828 CC lib/ftl/ftl_reloc.o 
00:02:08.828 CC lib/ftl/ftl_l2p_cache.o 00:02:08.828 CC lib/ftl/ftl_p2l.o 00:02:08.828 CC lib/ftl/ftl_p2l_log.o 00:02:08.828 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.828 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.828 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:09.087 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:09.087 CC lib/ftl/utils/ftl_conf.o 00:02:09.087 CC lib/ftl/utils/ftl_md.o 00:02:09.087 CC lib/ftl/utils/ftl_bitmap.o 00:02:09.087 CC lib/ftl/utils/ftl_property.o 00:02:09.087 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:09.087 CC lib/ftl/utils/ftl_mempool.o 00:02:09.087 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:09.087 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:09.087 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:09.087 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:09.087 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:09.087 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:09.087 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:09.087 CC lib/ftl/base/ftl_base_dev.o 00:02:09.087 CC lib/ftl/base/ftl_base_bdev.o 00:02:09.087 CC lib/ftl/ftl_trace.o 00:02:09.087 LIB libspdk_nbd.a 00:02:09.346 LIB libspdk_ublk.a 00:02:09.346 LIB libspdk_scsi.a 00:02:09.605 LIB libspdk_ftl.a 00:02:09.605 CC lib/iscsi/conn.o 00:02:09.864 CC lib/iscsi/init_grp.o 00:02:09.864 CC lib/iscsi/iscsi.o 00:02:09.864 CC lib/iscsi/tgt_node.o 00:02:09.864 CC lib/iscsi/param.o 00:02:09.864 CC lib/iscsi/iscsi_subsystem.o 00:02:09.864 CC lib/iscsi/portal_grp.o 00:02:09.864 CC lib/vhost/vhost.o 00:02:09.864 CC lib/vhost/vhost_scsi.o 00:02:09.864 CC lib/vhost/vhost_rpc.o 00:02:09.864 CC lib/iscsi/iscsi_rpc.o 00:02:09.864 CC lib/iscsi/task.o 00:02:09.864 CC lib/vhost/vhost_blk.o 00:02:09.864 CC lib/vhost/rte_vhost_user.o 00:02:10.123 LIB libspdk_nvmf.a 00:02:10.382 LIB libspdk_vhost.a 00:02:10.382 LIB libspdk_iscsi.a 00:02:10.950 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.950 CC module/vfu_device/vfu_virtio.o 00:02:10.950 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.950 CC module/vfu_device/vfu_virtio_fs.o 00:02:10.950 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.950 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.950 LIB libspdk_env_dpdk_rpc.a 00:02:10.950 CC module/keyring/file/keyring.o 00:02:10.950 CC module/keyring/file/keyring_rpc.o 00:02:10.950 CC module/sock/posix/posix.o 00:02:10.950 CC module/accel/iaa/accel_iaa.o 00:02:10.950 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.950 CC module/keyring/linux/keyring_rpc.o 00:02:10.950 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.950 CC module/keyring/linux/keyring.o 00:02:10.950 CC module/accel/dsa/accel_dsa.o 00:02:10.950 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.950 CC module/blob/bdev/blob_bdev.o 00:02:10.950 CC module/fsdev/aio/fsdev_aio.o 00:02:10.950 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.950 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:10.950 CC module/fsdev/aio/linux_aio_mgr.o 00:02:10.950 CC module/accel/error/accel_error.o 
00:02:10.950 CC module/accel/error/accel_error_rpc.o 00:02:11.208 CC module/accel/ioat/accel_ioat.o 00:02:11.208 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.208 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.208 LIB libspdk_keyring_file.a 00:02:11.208 LIB libspdk_keyring_linux.a 00:02:11.208 LIB libspdk_scheduler_gscheduler.a 00:02:11.208 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.208 LIB libspdk_accel_error.a 00:02:11.208 LIB libspdk_accel_iaa.a 00:02:11.208 LIB libspdk_scheduler_dynamic.a 00:02:11.208 LIB libspdk_accel_ioat.a 00:02:11.208 LIB libspdk_blob_bdev.a 00:02:11.208 LIB libspdk_accel_dsa.a 00:02:11.466 LIB libspdk_vfu_device.a 00:02:11.466 LIB libspdk_sock_posix.a 00:02:11.466 LIB libspdk_fsdev_aio.a 00:02:11.725 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.725 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.725 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.725 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.725 CC module/bdev/gpt/gpt.o 00:02:11.725 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.725 CC module/bdev/nvme/bdev_nvme.o 00:02:11.725 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.725 CC module/bdev/nvme/nvme_rpc.o 00:02:11.725 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.725 CC module/bdev/null/bdev_null.o 00:02:11.725 CC module/bdev/nvme/vbdev_opal.o 00:02:11.725 CC module/bdev/error/vbdev_error.o 00:02:11.725 CC module/bdev/null/bdev_null_rpc.o 00:02:11.725 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.725 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.725 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.725 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.725 CC module/bdev/raid/bdev_raid.o 00:02:11.725 CC module/bdev/delay/vbdev_delay.o 00:02:11.725 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.725 CC module/bdev/malloc/bdev_malloc.o 00:02:11.725 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.725 CC module/bdev/raid/raid1.o 00:02:11.725 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.725 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.725 CC module/bdev/raid/concat.o 00:02:11.725 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.725 CC module/bdev/raid/raid0.o 00:02:11.725 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.725 CC module/bdev/aio/bdev_aio.o 00:02:11.725 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.725 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.725 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.725 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.725 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.725 CC module/bdev/ftl/bdev_ftl.o 00:02:11.725 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.725 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.725 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.725 CC module/bdev/split/vbdev_split.o 00:02:11.725 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.725 LIB libspdk_blobfs_bdev.a 00:02:11.984 LIB libspdk_bdev_null.a 00:02:11.985 LIB libspdk_bdev_split.a 00:02:11.985 LIB libspdk_bdev_gpt.a 00:02:11.985 LIB libspdk_bdev_error.a 00:02:11.985 LIB libspdk_bdev_zone_block.a 00:02:11.985 LIB libspdk_bdev_ftl.a 00:02:11.985 LIB libspdk_bdev_aio.a 00:02:11.985 LIB libspdk_bdev_passthru.a 00:02:11.985 LIB libspdk_bdev_delay.a 00:02:11.985 LIB libspdk_bdev_iscsi.a 00:02:11.985 LIB libspdk_bdev_malloc.a 00:02:11.985 LIB libspdk_bdev_lvol.a 00:02:11.985 LIB libspdk_bdev_virtio.a 00:02:12.244 LIB libspdk_bdev_raid.a 00:02:12.811 LIB libspdk_bdev_nvme.a 00:02:13.750 CC module/event/subsystems/iobuf/iobuf.o 00:02:13.750 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:13.750 CC 
module/event/subsystems/sock/sock.o 00:02:13.750 CC module/event/subsystems/scheduler/scheduler.o 00:02:13.750 CC module/event/subsystems/vmd/vmd.o 00:02:13.750 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:13.750 CC module/event/subsystems/fsdev/fsdev.o 00:02:13.750 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:13.750 CC module/event/subsystems/keyring/keyring.o 00:02:13.750 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:13.750 LIB libspdk_event_sock.a 00:02:13.750 LIB libspdk_event_scheduler.a 00:02:13.750 LIB libspdk_event_vmd.a 00:02:13.750 LIB libspdk_event_vhost_blk.a 00:02:13.750 LIB libspdk_event_iobuf.a 00:02:13.750 LIB libspdk_event_keyring.a 00:02:13.750 LIB libspdk_event_fsdev.a 00:02:13.750 LIB libspdk_event_vfu_tgt.a 00:02:14.008 CC module/event/subsystems/accel/accel.o 00:02:14.008 LIB libspdk_event_accel.a 00:02:14.267 CC module/event/subsystems/bdev/bdev.o 00:02:14.525 LIB libspdk_event_bdev.a 00:02:14.783 CC module/event/subsystems/scsi/scsi.o 00:02:14.783 CC module/event/subsystems/ublk/ublk.o 00:02:14.783 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:14.783 CC module/event/subsystems/nbd/nbd.o 00:02:14.783 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.041 LIB libspdk_event_scsi.a 00:02:15.041 LIB libspdk_event_ublk.a 00:02:15.041 LIB libspdk_event_nbd.a 00:02:15.041 LIB libspdk_event_nvmf.a 00:02:15.300 CC module/event/subsystems/iscsi/iscsi.o 00:02:15.300 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:15.300 LIB libspdk_event_vhost_scsi.a 00:02:15.300 LIB libspdk_event_iscsi.a 00:02:15.556 CC test/rpc_client/rpc_client_test.o 00:02:15.556 TEST_HEADER include/spdk/accel.h 00:02:15.556 TEST_HEADER include/spdk/base64.h 00:02:15.556 TEST_HEADER include/spdk/accel_module.h 00:02:15.556 TEST_HEADER include/spdk/assert.h 00:02:15.556 TEST_HEADER include/spdk/barrier.h 00:02:15.556 TEST_HEADER include/spdk/bdev.h 00:02:15.556 TEST_HEADER include/spdk/bit_array.h 00:02:15.556 TEST_HEADER include/spdk/bdev_zone.h 00:02:15.556 TEST_HEADER include/spdk/bdev_module.h 00:02:15.556 TEST_HEADER include/spdk/bit_pool.h 00:02:15.556 TEST_HEADER include/spdk/blob_bdev.h 00:02:15.556 TEST_HEADER include/spdk/blobfs.h 00:02:15.556 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:15.556 TEST_HEADER include/spdk/conf.h 00:02:15.556 TEST_HEADER include/spdk/blob.h 00:02:15.556 TEST_HEADER include/spdk/cpuset.h 00:02:15.556 TEST_HEADER include/spdk/crc16.h 00:02:15.556 TEST_HEADER include/spdk/config.h 00:02:15.557 TEST_HEADER include/spdk/crc32.h 00:02:15.557 TEST_HEADER include/spdk/crc64.h 00:02:15.557 TEST_HEADER include/spdk/dif.h 00:02:15.557 TEST_HEADER include/spdk/dma.h 00:02:15.557 TEST_HEADER include/spdk/env_dpdk.h 00:02:15.557 TEST_HEADER include/spdk/env.h 00:02:15.557 TEST_HEADER include/spdk/endian.h 00:02:15.557 TEST_HEADER include/spdk/event.h 00:02:15.557 TEST_HEADER include/spdk/fd_group.h 00:02:15.557 TEST_HEADER include/spdk/fd.h 00:02:15.557 CC app/trace_record/trace_record.o 00:02:15.557 TEST_HEADER include/spdk/file.h 00:02:15.557 TEST_HEADER include/spdk/fsdev_module.h 00:02:15.557 TEST_HEADER include/spdk/ftl.h 00:02:15.557 TEST_HEADER include/spdk/fsdev.h 00:02:15.557 TEST_HEADER include/spdk/gpt_spec.h 00:02:15.557 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:15.557 TEST_HEADER include/spdk/hexlify.h 00:02:15.557 TEST_HEADER include/spdk/idxd.h 00:02:15.557 TEST_HEADER include/spdk/histogram_data.h 00:02:15.557 TEST_HEADER include/spdk/idxd_spec.h 00:02:15.557 TEST_HEADER include/spdk/ioat.h 00:02:15.557 TEST_HEADER 
include/spdk/init.h 00:02:15.557 CC app/spdk_top/spdk_top.o 00:02:15.557 CC app/spdk_nvme_discover/discovery_aer.o 00:02:15.557 CC app/spdk_nvme_perf/perf.o 00:02:15.557 TEST_HEADER include/spdk/ioat_spec.h 00:02:15.557 TEST_HEADER include/spdk/iscsi_spec.h 00:02:15.557 TEST_HEADER include/spdk/json.h 00:02:15.557 TEST_HEADER include/spdk/jsonrpc.h 00:02:15.557 TEST_HEADER include/spdk/keyring.h 00:02:15.557 TEST_HEADER include/spdk/keyring_module.h 00:02:15.822 TEST_HEADER include/spdk/likely.h 00:02:15.822 TEST_HEADER include/spdk/log.h 00:02:15.822 CXX app/trace/trace.o 00:02:15.822 TEST_HEADER include/spdk/lvol.h 00:02:15.822 TEST_HEADER include/spdk/md5.h 00:02:15.822 CC app/spdk_nvme_identify/identify.o 00:02:15.822 TEST_HEADER include/spdk/memory.h 00:02:15.822 TEST_HEADER include/spdk/mmio.h 00:02:15.822 TEST_HEADER include/spdk/nbd.h 00:02:15.822 TEST_HEADER include/spdk/notify.h 00:02:15.822 TEST_HEADER include/spdk/net.h 00:02:15.822 TEST_HEADER include/spdk/nvme.h 00:02:15.822 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:15.822 TEST_HEADER include/spdk/nvme_intel.h 00:02:15.822 TEST_HEADER include/spdk/nvme_spec.h 00:02:15.822 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:15.822 TEST_HEADER include/spdk/nvme_zns.h 00:02:15.822 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:15.822 TEST_HEADER include/spdk/nvmf.h 00:02:15.822 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:15.822 TEST_HEADER include/spdk/nvmf_spec.h 00:02:15.822 CC app/spdk_lspci/spdk_lspci.o 00:02:15.822 TEST_HEADER include/spdk/opal_spec.h 00:02:15.822 TEST_HEADER include/spdk/nvmf_transport.h 00:02:15.822 TEST_HEADER include/spdk/opal.h 00:02:15.822 TEST_HEADER include/spdk/pci_ids.h 00:02:15.822 TEST_HEADER include/spdk/pipe.h 00:02:15.822 TEST_HEADER include/spdk/queue.h 00:02:15.822 TEST_HEADER include/spdk/reduce.h 00:02:15.822 TEST_HEADER include/spdk/rpc.h 00:02:15.822 TEST_HEADER include/spdk/scheduler.h 00:02:15.822 CC app/nvmf_tgt/nvmf_main.o 00:02:15.822 TEST_HEADER include/spdk/scsi.h 00:02:15.822 TEST_HEADER include/spdk/sock.h 00:02:15.822 TEST_HEADER include/spdk/stdinc.h 00:02:15.822 TEST_HEADER include/spdk/scsi_spec.h 00:02:15.822 TEST_HEADER include/spdk/string.h 00:02:15.822 TEST_HEADER include/spdk/thread.h 00:02:15.822 CC app/spdk_dd/spdk_dd.o 00:02:15.822 TEST_HEADER include/spdk/trace_parser.h 00:02:15.822 TEST_HEADER include/spdk/trace.h 00:02:15.822 TEST_HEADER include/spdk/tree.h 00:02:15.822 TEST_HEADER include/spdk/util.h 00:02:15.822 TEST_HEADER include/spdk/ublk.h 00:02:15.822 TEST_HEADER include/spdk/version.h 00:02:15.822 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:15.822 TEST_HEADER include/spdk/uuid.h 00:02:15.822 TEST_HEADER include/spdk/vhost.h 00:02:15.822 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:15.822 TEST_HEADER include/spdk/xor.h 00:02:15.822 TEST_HEADER include/spdk/vmd.h 00:02:15.822 CXX test/cpp_headers/accel.o 00:02:15.822 CXX test/cpp_headers/accel_module.o 00:02:15.822 TEST_HEADER include/spdk/zipf.h 00:02:15.822 CXX test/cpp_headers/assert.o 00:02:15.822 CXX test/cpp_headers/base64.o 00:02:15.822 CXX test/cpp_headers/bdev.o 00:02:15.822 CXX test/cpp_headers/bdev_module.o 00:02:15.822 CXX test/cpp_headers/barrier.o 00:02:15.822 CXX test/cpp_headers/bdev_zone.o 00:02:15.822 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.822 CXX test/cpp_headers/bit_pool.o 00:02:15.822 CXX test/cpp_headers/bit_array.o 00:02:15.822 CXX test/cpp_headers/blob_bdev.o 00:02:15.822 CXX test/cpp_headers/blobfs_bdev.o 00:02:15.822 CXX test/cpp_headers/blobfs.o 00:02:15.822 CXX 
test/cpp_headers/blob.o 00:02:15.822 CC app/iscsi_tgt/iscsi_tgt.o 00:02:15.822 CXX test/cpp_headers/conf.o 00:02:15.822 CXX test/cpp_headers/cpuset.o 00:02:15.822 CXX test/cpp_headers/config.o 00:02:15.822 CXX test/cpp_headers/crc32.o 00:02:15.822 CXX test/cpp_headers/crc64.o 00:02:15.822 CXX test/cpp_headers/crc16.o 00:02:15.822 CXX test/cpp_headers/dif.o 00:02:15.822 CXX test/cpp_headers/dma.o 00:02:15.822 CXX test/cpp_headers/endian.o 00:02:15.822 CXX test/cpp_headers/env_dpdk.o 00:02:15.822 CXX test/cpp_headers/env.o 00:02:15.822 CXX test/cpp_headers/fd_group.o 00:02:15.822 CXX test/cpp_headers/fd.o 00:02:15.822 CXX test/cpp_headers/event.o 00:02:15.822 CXX test/cpp_headers/file.o 00:02:15.822 CXX test/cpp_headers/fsdev_module.o 00:02:15.822 CXX test/cpp_headers/fsdev.o 00:02:15.822 CXX test/cpp_headers/ftl.o 00:02:15.822 CXX test/cpp_headers/fuse_dispatcher.o 00:02:15.822 CXX test/cpp_headers/gpt_spec.o 00:02:15.822 CXX test/cpp_headers/hexlify.o 00:02:15.822 CXX test/cpp_headers/histogram_data.o 00:02:15.822 CC app/spdk_tgt/spdk_tgt.o 00:02:15.822 CXX test/cpp_headers/idxd_spec.o 00:02:15.822 CXX test/cpp_headers/init.o 00:02:15.822 CXX test/cpp_headers/idxd.o 00:02:15.822 CXX test/cpp_headers/ioat_spec.o 00:02:15.822 CXX test/cpp_headers/ioat.o 00:02:15.822 CXX test/cpp_headers/json.o 00:02:15.822 CXX test/cpp_headers/iscsi_spec.o 00:02:15.822 CXX test/cpp_headers/jsonrpc.o 00:02:15.822 CXX test/cpp_headers/keyring.o 00:02:15.822 CXX test/cpp_headers/keyring_module.o 00:02:15.822 CXX test/cpp_headers/likely.o 00:02:15.822 CXX test/cpp_headers/log.o 00:02:15.822 CXX test/cpp_headers/lvol.o 00:02:15.822 CXX test/cpp_headers/md5.o 00:02:15.822 CXX test/cpp_headers/memory.o 00:02:15.822 CC test/thread/poller_perf/poller_perf.o 00:02:15.822 CXX test/cpp_headers/mmio.o 00:02:15.822 CXX test/cpp_headers/nbd.o 00:02:15.822 CXX test/cpp_headers/net.o 00:02:15.822 CXX test/cpp_headers/notify.o 00:02:15.822 CXX test/cpp_headers/nvme.o 00:02:15.822 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.822 CXX test/cpp_headers/nvme_intel.o 00:02:15.822 CXX test/cpp_headers/nvme_ocssd.o 00:02:15.822 CXX test/cpp_headers/nvme_spec.o 00:02:15.822 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:15.822 CXX test/cpp_headers/nvme_zns.o 00:02:15.822 CXX test/cpp_headers/nvmf_cmd.o 00:02:15.822 CC test/env/pci/pci_ut.o 00:02:15.822 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:15.822 CC test/thread/lock/spdk_lock.o 00:02:15.822 CXX test/cpp_headers/nvmf_spec.o 00:02:15.822 CXX test/cpp_headers/nvmf.o 00:02:15.822 CXX test/cpp_headers/nvmf_transport.o 00:02:15.822 CXX test/cpp_headers/opal.o 00:02:15.822 CC test/env/vtophys/vtophys.o 00:02:15.822 CXX test/cpp_headers/opal_spec.o 00:02:15.822 CXX test/cpp_headers/pci_ids.o 00:02:15.822 CXX test/cpp_headers/pipe.o 00:02:15.822 LINK rpc_client_test 00:02:15.822 CXX test/cpp_headers/reduce.o 00:02:15.822 CC test/env/memory/memory_ut.o 00:02:15.822 CXX test/cpp_headers/queue.o 00:02:15.822 CXX test/cpp_headers/rpc.o 00:02:15.822 CXX test/cpp_headers/scsi.o 00:02:15.822 CXX test/cpp_headers/scheduler.o 00:02:15.822 CXX test/cpp_headers/scsi_spec.o 00:02:15.822 CXX test/cpp_headers/sock.o 00:02:15.822 CXX test/cpp_headers/stdinc.o 00:02:15.822 CXX test/cpp_headers/string.o 00:02:15.822 CXX test/cpp_headers/thread.o 00:02:15.822 CC test/app/histogram_perf/histogram_perf.o 00:02:15.822 CC test/app/stub/stub.o 00:02:15.822 CXX test/cpp_headers/trace.o 00:02:15.822 CC test/app/jsoncat/jsoncat.o 00:02:15.822 CC examples/util/zipf/zipf.o 00:02:15.822 CC 
examples/ioat/perf/perf.o 00:02:15.822 CC test/dma/test_dma/test_dma.o 00:02:15.822 CXX test/cpp_headers/trace_parser.o 00:02:15.822 CC examples/ioat/verify/verify.o 00:02:15.822 CC app/fio/nvme/fio_plugin.o 00:02:15.822 CXX test/cpp_headers/tree.o 00:02:15.822 CC test/env/mem_callbacks/mem_callbacks.o 00:02:15.822 LINK spdk_lspci 00:02:15.822 CC test/app/bdev_svc/bdev_svc.o 00:02:15.822 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:15.822 CXX test/cpp_headers/ublk.o 00:02:15.822 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:15.822 CC app/fio/bdev/fio_plugin.o 00:02:15.822 LINK spdk_trace_record 00:02:15.822 LINK spdk_nvme_discover 00:02:16.081 CXX test/cpp_headers/util.o 00:02:16.081 LINK nvmf_tgt 00:02:16.081 CXX test/cpp_headers/uuid.o 00:02:16.081 CXX test/cpp_headers/version.o 00:02:16.081 CXX test/cpp_headers/vfio_user_pci.o 00:02:16.081 CXX test/cpp_headers/vfio_user_spec.o 00:02:16.081 CXX test/cpp_headers/vhost.o 00:02:16.081 CXX test/cpp_headers/vmd.o 00:02:16.081 LINK vtophys 00:02:16.081 CXX test/cpp_headers/xor.o 00:02:16.081 CXX test/cpp_headers/zipf.o 00:02:16.081 LINK poller_perf 00:02:16.081 LINK jsoncat 00:02:16.081 LINK histogram_perf 00:02:16.081 LINK interrupt_tgt 00:02:16.081 LINK env_dpdk_post_init 00:02:16.081 LINK zipf 00:02:16.081 LINK stub 00:02:16.081 LINK iscsi_tgt 00:02:16.081 LINK ioat_perf 00:02:16.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.081 LINK spdk_tgt 00:02:16.081 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:16.081 LINK verify 00:02:16.081 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:16.081 LINK bdev_svc 00:02:16.081 LINK spdk_trace 00:02:16.081 LINK spdk_dd 00:02:16.339 LINK nvme_fuzz 00:02:16.339 LINK test_dma 00:02:16.339 LINK spdk_nvme 00:02:16.339 LINK pci_ut 00:02:16.339 LINK llvm_vfio_fuzz 00:02:16.339 LINK spdk_nvme_perf 00:02:16.339 LINK vhost_fuzz 00:02:16.339 LINK spdk_bdev 00:02:16.339 LINK spdk_nvme_identify 00:02:16.339 LINK mem_callbacks 00:02:16.339 LINK spdk_top 00:02:16.629 CC examples/sock/hello_world/hello_sock.o 00:02:16.629 CC examples/vmd/lsvmd/lsvmd.o 00:02:16.629 CC examples/idxd/perf/perf.o 00:02:16.629 CC examples/vmd/led/led.o 00:02:16.629 LINK llvm_nvme_fuzz 00:02:16.629 CC examples/thread/thread/thread_ex.o 00:02:16.629 CC app/vhost/vhost.o 00:02:16.629 LINK memory_ut 00:02:16.630 LINK lsvmd 00:02:16.630 LINK led 00:02:16.887 LINK hello_sock 00:02:16.887 LINK spdk_lock 00:02:16.887 LINK idxd_perf 00:02:16.887 LINK vhost 00:02:16.887 LINK thread 00:02:16.887 LINK iscsi_fuzz 00:02:17.453 CC examples/nvme/hello_world/hello_world.o 00:02:17.453 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:17.453 CC test/event/reactor/reactor.o 00:02:17.453 CC examples/nvme/reconnect/reconnect.o 00:02:17.453 CC test/event/reactor_perf/reactor_perf.o 00:02:17.453 CC examples/nvme/arbitration/arbitration.o 00:02:17.453 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:17.453 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:17.453 CC examples/nvme/abort/abort.o 00:02:17.453 CC examples/nvme/hotplug/hotplug.o 00:02:17.453 CC test/event/event_perf/event_perf.o 00:02:17.453 CC test/event/scheduler/scheduler.o 00:02:17.453 CC test/event/app_repeat/app_repeat.o 00:02:17.453 LINK reactor 00:02:17.453 LINK reactor_perf 00:02:17.453 LINK pmr_persistence 00:02:17.453 LINK hello_world 00:02:17.712 LINK event_perf 00:02:17.712 LINK cmb_copy 00:02:17.712 LINK app_repeat 00:02:17.712 LINK hotplug 00:02:17.712 LINK scheduler 00:02:17.712 LINK arbitration 00:02:17.712 
LINK abort 00:02:17.712 LINK nvme_manage 00:02:17.712 LINK reconnect 00:02:17.969 CC test/nvme/boot_partition/boot_partition.o 00:02:17.969 CC test/nvme/cuse/cuse.o 00:02:17.969 CC test/nvme/fused_ordering/fused_ordering.o 00:02:17.969 CC test/nvme/overhead/overhead.o 00:02:17.969 CC test/nvme/e2edp/nvme_dp.o 00:02:17.969 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:17.969 CC test/nvme/compliance/nvme_compliance.o 00:02:17.969 CC test/nvme/sgl/sgl.o 00:02:17.969 CC test/nvme/reserve/reserve.o 00:02:17.969 CC test/nvme/startup/startup.o 00:02:17.969 CC test/nvme/reset/reset.o 00:02:17.969 CC test/nvme/err_injection/err_injection.o 00:02:17.969 CC test/nvme/fdp/fdp.o 00:02:17.969 CC test/nvme/simple_copy/simple_copy.o 00:02:17.969 CC test/nvme/connect_stress/connect_stress.o 00:02:17.969 CC test/nvme/aer/aer.o 00:02:17.969 CC test/blobfs/mkfs/mkfs.o 00:02:17.969 CC test/accel/dif/dif.o 00:02:17.969 LINK boot_partition 00:02:17.969 CC test/lvol/esnap/esnap.o 00:02:17.969 LINK fused_ordering 00:02:17.969 LINK startup 00:02:17.969 LINK err_injection 00:02:17.969 LINK doorbell_aers 00:02:17.969 LINK connect_stress 00:02:17.969 LINK reserve 00:02:17.969 LINK nvme_dp 00:02:17.969 LINK simple_copy 00:02:17.969 LINK sgl 00:02:17.969 LINK reset 00:02:17.969 LINK overhead 00:02:17.969 LINK aer 00:02:18.228 LINK fdp 00:02:18.228 LINK mkfs 00:02:18.228 LINK nvme_compliance 00:02:18.488 LINK dif 00:02:18.488 CC examples/blob/cli/blobcli.o 00:02:18.488 CC examples/accel/perf/accel_perf.o 00:02:18.488 CC examples/blob/hello_world/hello_blob.o 00:02:18.488 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:18.747 LINK cuse 00:02:18.747 LINK hello_blob 00:02:18.747 LINK hello_fsdev 00:02:18.747 LINK blobcli 00:02:18.747 LINK accel_perf 00:02:19.684 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.684 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.684 LINK hello_bdev 00:02:19.944 CC test/bdev/bdevio/bdevio.o 00:02:19.944 LINK bdevperf 00:02:20.203 LINK bdevio 00:02:21.584 LINK esnap 00:02:21.584 CC examples/nvmf/nvmf/nvmf.o 00:02:21.584 LINK nvmf 00:02:22.962 00:02:22.962 real 0m45.482s 00:02:22.962 user 6m13.948s 00:02:22.962 sys 2m32.956s 00:02:22.962 21:46:07 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:22.962 21:46:07 make -- common/autotest_common.sh@10 -- $ set +x 00:02:22.962 ************************************ 00:02:22.962 END TEST make 00:02:22.962 ************************************ 00:02:22.962 21:46:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.962 21:46:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.962 21:46:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.962 21:46:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.963 21:46:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.963 21:46:07 -- pm/common@44 -- $ pid=917976 00:02:22.963 21:46:07 -- pm/common@50 -- $ kill -TERM 917976 00:02:22.963 21:46:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.963 21:46:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:22.963 21:46:07 -- pm/common@44 -- $ pid=917978 00:02:22.963 21:46:07 -- pm/common@50 -- $ kill -TERM 917978 00:02:22.963 21:46:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.963 21:46:07 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:22.963 21:46:07 -- pm/common@44 -- $ pid=917980 00:02:22.963 21:46:07 -- pm/common@50 -- $ kill -TERM 917980 00:02:22.963 21:46:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.963 21:46:07 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:22.963 21:46:07 -- pm/common@44 -- $ pid=918008 00:02:22.963 21:46:07 -- pm/common@50 -- $ sudo -E kill -TERM 918008 00:02:22.963 21:46:07 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:22.963 21:46:07 -- common/autotest_common.sh@1681 -- # lcov --version 00:02:22.963 21:46:07 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:23.222 21:46:07 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:23.222 21:46:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:23.222 21:46:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:23.222 21:46:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:23.222 21:46:07 -- scripts/common.sh@336 -- # IFS=.-: 00:02:23.222 21:46:07 -- scripts/common.sh@336 -- # read -ra ver1 00:02:23.222 21:46:07 -- scripts/common.sh@337 -- # IFS=.-: 00:02:23.222 21:46:07 -- scripts/common.sh@337 -- # read -ra ver2 00:02:23.222 21:46:07 -- scripts/common.sh@338 -- # local 'op=<' 00:02:23.222 21:46:07 -- scripts/common.sh@340 -- # ver1_l=2 00:02:23.222 21:46:07 -- scripts/common.sh@341 -- # ver2_l=1 00:02:23.222 21:46:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:23.222 21:46:07 -- scripts/common.sh@344 -- # case "$op" in 00:02:23.222 21:46:07 -- scripts/common.sh@345 -- # : 1 00:02:23.222 21:46:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:23.222 21:46:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:23.222 21:46:07 -- scripts/common.sh@365 -- # decimal 1 00:02:23.222 21:46:07 -- scripts/common.sh@353 -- # local d=1 00:02:23.222 21:46:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:23.222 21:46:07 -- scripts/common.sh@355 -- # echo 1 00:02:23.222 21:46:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:23.222 21:46:07 -- scripts/common.sh@366 -- # decimal 2 00:02:23.222 21:46:07 -- scripts/common.sh@353 -- # local d=2 00:02:23.222 21:46:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:23.222 21:46:07 -- scripts/common.sh@355 -- # echo 2 00:02:23.222 21:46:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:23.222 21:46:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:23.222 21:46:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:23.222 21:46:07 -- scripts/common.sh@368 -- # return 0 00:02:23.222 21:46:07 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:23.222 21:46:07 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:23.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.222 --rc genhtml_branch_coverage=1 00:02:23.222 --rc genhtml_function_coverage=1 00:02:23.222 --rc genhtml_legend=1 00:02:23.222 --rc geninfo_all_blocks=1 00:02:23.222 --rc geninfo_unexecuted_blocks=1 00:02:23.222 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:23.222 ' 00:02:23.222 21:46:07 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:23.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.222 --rc genhtml_branch_coverage=1 00:02:23.222 --rc genhtml_function_coverage=1 00:02:23.222 --rc genhtml_legend=1 00:02:23.222 --rc geninfo_all_blocks=1 00:02:23.222 --rc geninfo_unexecuted_blocks=1 00:02:23.222 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:23.222 ' 00:02:23.222 21:46:07 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:23.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.222 --rc genhtml_branch_coverage=1 00:02:23.222 --rc genhtml_function_coverage=1 00:02:23.222 --rc genhtml_legend=1 00:02:23.222 --rc geninfo_all_blocks=1 00:02:23.222 --rc geninfo_unexecuted_blocks=1 00:02:23.222 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:23.222 ' 00:02:23.222 21:46:07 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:23.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.222 --rc genhtml_branch_coverage=1 00:02:23.222 --rc genhtml_function_coverage=1 00:02:23.222 --rc genhtml_legend=1 00:02:23.222 --rc geninfo_all_blocks=1 00:02:23.222 --rc geninfo_unexecuted_blocks=1 00:02:23.222 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:23.222 ' 00:02:23.222 21:46:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.222 21:46:07 -- nvmf/common.sh@7 -- # uname -s 00:02:23.222 21:46:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.222 21:46:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.222 21:46:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.223 21:46:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.223 21:46:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.223 21:46:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.223 21:46:07 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.223 21:46:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.223 21:46:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.223 21:46:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.223 21:46:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:23.223 21:46:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:23.223 21:46:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.223 21:46:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.223 21:46:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:23.223 21:46:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.223 21:46:07 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:23.223 21:46:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:23.223 21:46:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.223 21:46:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.223 21:46:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.223 21:46:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.223 21:46:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.223 21:46:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.223 21:46:07 -- paths/export.sh@5 -- # export PATH 00:02:23.223 21:46:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.223 21:46:07 -- nvmf/common.sh@51 -- # : 0 00:02:23.223 21:46:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:23.223 21:46:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:23.223 21:46:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.223 21:46:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.223 21:46:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.223 21:46:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:23.223 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:23.223 21:46:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:23.223 21:46:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:23.223 21:46:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:23.223 21:46:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.223 21:46:07 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.223 
21:46:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.223 21:46:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.223 21:46:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:23.223 21:46:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.223 21:46:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:23.223 21:46:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.223 21:46:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.223 21:46:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.223 21:46:07 -- spdk/autotest.sh@48 -- # udevadm_pid=981602 00:02:23.223 21:46:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.223 21:46:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.223 21:46:07 -- pm/common@17 -- # local monitor 00:02:23.223 21:46:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.223 21:46:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.223 21:46:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.223 21:46:07 -- pm/common@21 -- # date +%s 00:02:23.223 21:46:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.223 21:46:07 -- pm/common@21 -- # date +%s 00:02:23.223 21:46:07 -- pm/common@21 -- # date +%s 00:02:23.223 21:46:07 -- pm/common@25 -- # sleep 1 00:02:23.223 21:46:07 -- pm/common@21 -- # date +%s 00:02:23.223 21:46:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727725567 00:02:23.223 21:46:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727725567 00:02:23.223 21:46:07 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727725567 00:02:23.223 21:46:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1727725567 00:02:23.223 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727725567_collect-cpu-temp.pm.log 00:02:23.223 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727725567_collect-cpu-load.pm.log 00:02:23.223 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727725567_collect-vmstat.pm.log 00:02:23.223 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1727725567_collect-bmc-pm.bmc.pm.log 00:02:24.161 21:46:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:24.161 21:46:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:24.161 21:46:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:24.161 21:46:08 -- common/autotest_common.sh@10 -- # set +x 
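The trace above shows autotest.sh saving the host's existing core_pattern (systemd-coredump) and pointing future core dumps at spdk/scripts/core-collector.sh before the power/CPU monitors start. A minimal sketch of that hand-off follows; the redirect target (/proc/sys/kernel/core_pattern) and the restore-on-exit trap are assumptions, since redirections and traps do not appear in xtrace output.

# Sketch only -- not the SPDK script itself. Redirect target and trap are assumed.
rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk      # path as seen in the trace
output_dir=$rootdir/../output                                    # path as seen in the trace
old_core_pattern=$(cat /proc/sys/kernel/core_pattern)            # e.g. "|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h"
mkdir -p "$output_dir/coredumps"
# Pipe every crash to the collector, passing PID, signal and timestamp placeholders.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
# Put the original handler back when the run ends (assumed behaviour, not visible in the log).
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT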
00:02:24.161 21:46:08 -- spdk/autotest.sh@59 -- # create_test_list 00:02:24.161 21:46:08 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:24.161 21:46:08 -- common/autotest_common.sh@10 -- # set +x 00:02:24.161 21:46:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:24.161 21:46:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:24.161 21:46:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:24.161 21:46:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:24.161 21:46:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:24.161 21:46:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:24.161 21:46:08 -- common/autotest_common.sh@1455 -- # uname 00:02:24.161 21:46:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:24.161 21:46:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:24.161 21:46:08 -- common/autotest_common.sh@1475 -- # uname 00:02:24.161 21:46:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:24.161 21:46:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:24.161 21:46:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:24.420 lcov: LCOV version 1.15 00:02:24.420 21:46:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:32.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.117 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:02:41.241 21:46:24 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:41.241 21:46:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:41.241 21:46:24 -- common/autotest_common.sh@10 -- # set +x 00:02:41.241 21:46:24 -- spdk/autotest.sh@78 -- # rm -f 00:02:41.241 21:46:24 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.207 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.207 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.207 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.207 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.207 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.506 
0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.506 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.766 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:43.766 21:46:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:43.766 21:46:27 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:43.766 21:46:27 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:43.766 21:46:27 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:43.766 21:46:27 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:43.766 21:46:27 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:43.766 21:46:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:43.766 21:46:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.766 21:46:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:43.766 21:46:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:43.766 21:46:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.766 21:46:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:43.766 21:46:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:43.766 21:46:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:43.766 21:46:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.766 No valid GPT data, bailing 00:02:43.766 21:46:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.766 21:46:27 -- scripts/common.sh@394 -- # pt= 00:02:43.766 21:46:27 -- scripts/common.sh@395 -- # return 1 00:02:43.766 21:46:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.766 1+0 records in 00:02:43.766 1+0 records out 00:02:43.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439033 s, 239 MB/s 00:02:43.766 21:46:27 -- spdk/autotest.sh@105 -- # sync 00:02:43.766 21:46:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.766 21:46:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.766 21:46:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.340 21:46:34 -- spdk/autotest.sh@111 -- # uname -s 00:02:50.340 21:46:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:50.340 21:46:34 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:02:50.340 21:46:34 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.340 21:46:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:50.340 21:46:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:50.340 21:46:34 -- common/autotest_common.sh@10 -- # set +x 00:02:50.340 ************************************ 00:02:50.340 START TEST setup.sh 00:02:50.340 ************************************ 00:02:50.340 21:46:34 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.340 * Looking for test storage... 
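Before touching the NVMe namespace, the run above filters out zoned block devices (is_block_zoned reads /sys/block/<dev>/queue/zoned), checks for a partition table with spdk-gpt.py and blkid, and only then scrubs the first 1 MiB with dd. A rough, self-contained version of that zoned-device filter, assuming sysfs is laid out as in the trace:

# Illustrative only; mirrors the get_zoned_devs/is_block_zoned steps seen above.
get_zoned_devs() {
    local nvme zoned
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue          # attribute may be absent on older kernels
        zoned=$(<"$nvme/queue/zoned")
        [[ $zoned != none ]] && echo "${nvme##*/}"      # report only zoned namespaces
    done
}
# Conventional drives that report no partition table can then be wiped, as the log does:
#   dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1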
00:02:50.340 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:50.340 21:46:34 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:50.340 21:46:34 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:02:50.340 21:46:34 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:50.340 21:46:34 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@345 -- # : 1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@353 -- # local d=1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@355 -- # echo 1 00:02:50.340 21:46:34 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@353 -- # local d=2 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@355 -- # echo 2 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.601 21:46:34 setup.sh -- scripts/common.sh@368 -- # return 0 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 
00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.601 21:46:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.601 21:46:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.601 21:46:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:50.601 21:46:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.601 ************************************ 00:02:50.601 START TEST acl 00:02:50.601 ************************************ 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:02:50.601 * Looking for test storage... 
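The lcov --version check repeated throughout this run ("lt 1.15 2") is a plain dotted-version comparison: split both versions on dots and compare component by component, treating a missing component as 0. A compact stand-in, purely illustrative rather than a copy of scripts/common.sh:

version_lt() {                       # returns 0 (true) when $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                         # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, keep the --rc branch/function coverage flags"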
00:02:50.601 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:50.601 21:46:34 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.601 21:46:34 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:02:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.601 --rc genhtml_branch_coverage=1 00:02:50.601 --rc 
genhtml_function_coverage=1 00:02:50.601 --rc genhtml_legend=1 00:02:50.601 --rc geninfo_all_blocks=1 00:02:50.601 --rc geninfo_unexecuted_blocks=1 00:02:50.601 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.601 ' 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:02:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.602 --rc genhtml_branch_coverage=1 00:02:50.602 --rc genhtml_function_coverage=1 00:02:50.602 --rc genhtml_legend=1 00:02:50.602 --rc geninfo_all_blocks=1 00:02:50.602 --rc geninfo_unexecuted_blocks=1 00:02:50.602 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.602 ' 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:02:50.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:50.602 --rc genhtml_branch_coverage=1 00:02:50.602 --rc genhtml_function_coverage=1 00:02:50.602 --rc genhtml_legend=1 00:02:50.602 --rc geninfo_all_blocks=1 00:02:50.602 --rc geninfo_unexecuted_blocks=1 00:02:50.602 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:50.602 ' 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.602 21:46:34 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.602 21:46:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.602 21:46:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.602 21:46:34 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.803 21:46:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:54.803 21:46:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:54.803 21:46:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:54.803 21:46:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:54.803 21:46:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:54.803 21:46:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:58.095 Hugepages 00:02:58.095 node hugesize free / total 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 00:02:58.095 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.095 21:46:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.095 21:46:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.095 21:46:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.095 21:46:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.095 ************************************ 00:02:58.095 START TEST denied 00:02:58.095 ************************************ 00:02:58.095 21:46:42 setup.sh.acl.denied -- 
common/autotest_common.sh@1125 -- # denied 00:02:58.095 21:46:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:58.095 21:46:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.095 21:46:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:58.095 21:46:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.095 21:46:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:01.384 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.384 21:46:45 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.660 00:03:06.660 real 0m7.668s 00:03:06.660 user 0m2.366s 00:03:06.660 sys 0m4.609s 00:03:06.660 21:46:50 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:06.660 21:46:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:06.660 ************************************ 00:03:06.660 END TEST denied 00:03:06.660 ************************************ 00:03:06.660 21:46:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.660 21:46:50 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:06.660 21:46:50 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:06.660 21:46:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.660 ************************************ 00:03:06.660 START TEST allowed 00:03:06.660 ************************************ 00:03:06.660 21:46:50 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:06.660 21:46:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:06.660 21:46:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:06.660 21:46:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:06.660 21:46:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.660 21:46:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:10.856 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.856 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:10.856 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:10.856 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:10.856 21:46:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.856 21:46:55 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.050 00:03:15.050 real 0m9.019s 00:03:15.050 user 0m2.430s 00:03:15.050 sys 0m5.175s 00:03:15.050 21:46:59 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.050 21:46:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:15.050 ************************************ 00:03:15.050 END TEST allowed 00:03:15.050 ************************************ 00:03:15.050 00:03:15.050 real 0m24.415s 00:03:15.050 user 0m7.600s 00:03:15.050 sys 0m15.006s 00:03:15.050 21:46:59 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.050 21:46:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.050 ************************************ 00:03:15.050 END TEST acl 00:03:15.050 ************************************ 00:03:15.050 21:46:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.050 21:46:59 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.050 21:46:59 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.050 21:46:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:15.050 ************************************ 00:03:15.050 START TEST hugepages 00:03:15.050 ************************************ 00:03:15.050 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:15.050 * Looking for test storage... 00:03:15.050 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.050 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:15.051 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:03:15.051 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.312 21:46:59 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.312 --rc genhtml_branch_coverage=1 00:03:15.312 --rc genhtml_function_coverage=1 00:03:15.312 --rc genhtml_legend=1 00:03:15.312 --rc geninfo_all_blocks=1 00:03:15.312 --rc geninfo_unexecuted_blocks=1 00:03:15.312 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.312 ' 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.312 --rc genhtml_branch_coverage=1 00:03:15.312 --rc genhtml_function_coverage=1 00:03:15.312 --rc genhtml_legend=1 00:03:15.312 --rc geninfo_all_blocks=1 00:03:15.312 --rc geninfo_unexecuted_blocks=1 00:03:15.312 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.312 ' 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.312 --rc genhtml_branch_coverage=1 00:03:15.312 --rc genhtml_function_coverage=1 00:03:15.312 --rc genhtml_legend=1 00:03:15.312 --rc geninfo_all_blocks=1 00:03:15.312 --rc geninfo_unexecuted_blocks=1 00:03:15.312 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.312 ' 00:03:15.312 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:15.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.312 --rc genhtml_branch_coverage=1 00:03:15.312 --rc genhtml_function_coverage=1 00:03:15.312 --rc genhtml_legend=1 00:03:15.312 --rc geninfo_all_blocks=1 00:03:15.312 --rc geninfo_unexecuted_blocks=1 00:03:15.312 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:15.312 ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:15.312 21:46:59 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 42148808 kB' 'MemAvailable: 43241484 kB' 'Buffers: 3740 kB' 'Cached: 9677776 kB' 'SwapCached: 964 kB' 'Active: 8949212 kB' 'Inactive: 1315444 kB' 'Active(anon): 8693112 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585488 kB' 'Mapped: 217592 kB' 'Shmem: 8222756 kB' 'KReclaimable: 266108 kB' 'Slab: 1192312 kB' 'SReclaimable: 266108 kB' 'SUnreclaim: 926204 kB' 'KernelStack: 21936 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433360 kB' 'Committed_AS: 10082764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216896 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.312 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:15.313 21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:15.313 
21:46:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the IFS=': ' / read -r loop checks each remaining /proc/meminfo field (Mlocked through HugePages_Surp) against Hugepagesize and continues past every non-match ...]
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
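The scan that just completed is the core of the get_meminfo helper in setup/common.sh: split each /proc/meminfo line on ': ', keep reading until the requested field matches, then echo its value (2048 for Hugepagesize here). The stand-alone sketch below is illustrative only; the function name and the direct read from /proc/meminfo are assumptions, and the real helper also handles per-node meminfo files.

  # Sketch of the field scan traced above (illustrative, not the SPDK helper itself).
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip Mlocked, SwapTotal, ... until the field matches
          echo "$val"                        # e.g. 2048 for Hugepagesize (value is in kB)
          return 0
      done < /proc/meminfo
      return 1
  }
  # get_meminfo_sketch Hugepagesize   -> 2048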
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:03:15.314 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:03:15.314 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:15.314 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:15.314 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:15.314 ************************************
00:03:15.314 START TEST single_node_setup
00:03:15.314 ************************************
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
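Before the test allocates anything, clear_hp (traced above) walks every NUMA node and every hugepage size directory and resets its pool to zero; the xtrace only records the bare echo 0 because redirections are not traced. A hedged sketch of that pass, with the nr_hugepages target inferred rather than quoted from the log:

  # Reset every per-node hugepage pool to 0 (the redirection target is an assumption;
  # the trace above only shows "echo 0" for each hugepages-* directory).
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes   # matches setup/hugepages.sh@44 above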
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.314 21:46:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:18.609 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:18.609 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:18.609 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:18.609 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:18.610 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:18.610 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:18.610 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:18.610 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:18.870 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:20.249 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
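get_test_nr_hugepages converts the 2097152 kB request into 1024 pages of 2048 kB each (2097152 / 2048 = 1024) and pins them to node 0, then invokes spdk/scripts/setup.sh with NRHUGE=1024 and HUGENODE=0; setup.sh also rebinds the ioatdma and NVMe devices to vfio-pci, which is the output above. The allocation itself happens inside setup.sh and is not traced in this log, so the sysfs write below is only an illustration of the per-node knob those variables map onto, not a quote from the script:

  # Illustrative only: the per-node sysfs interface that NRHUGE/HUGENODE ultimately drive.
  NRHUGE=1024
  HUGENODE=0
  echo "$NRHUGE" > "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024 afterwards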
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.512 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.513 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44321288 kB' 'MemAvailable: 45413980 kB' 'Buffers: 3740 kB' 'Cached: 9677932 kB' 'SwapCached: 964 kB' 'Active: 8951484 kB' 'Inactive: 1315444 kB' 'Active(anon): 8695384 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587044 kB' 'Mapped: 217712 kB' 'Shmem: 8222912 kB' 'KReclaimable: 266140 kB' 'Slab: 1191188 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925048 kB' 'KernelStack: 21952 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10084056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216912 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB'
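verify_nr_hugepages now pulls AnonHugePages, HugePages_Surp and HugePages_Rsvd out of the snapshot above with get_meminfo; the snapshot already shows the values it will find (Total 1024, Free 1024, Rsvd 0, Surp 0, AnonHugePages 0). The concrete assertions are not visible in this excerpt, so the check below is only an illustration of the kind of consistency test this amounts to:

  # Illustrative consistency check, not the actual verify_nr_hugepages logic.
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  (( total == 1024 && surp == 0 )) && echo "hugepage pool consistent: $free/$total free, rsvd=$rsvd"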
[... xtrace elided: the same read loop now checks every /proc/meminfo field (MemTotal through HardwareCorrupted) against AnonHugePages, continuing past each non-match ...]
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:20.514 21:47:04
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44321736 kB' 'MemAvailable: 45414428 kB' 'Buffers: 3740 kB' 'Cached: 9677932 kB' 'SwapCached: 964 kB' 'Active: 8951132 kB' 'Inactive: 1315444 kB' 'Active(anon): 8695032 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586796 kB' 'Mapped: 217692 kB' 'Shmem: 8222912 kB' 'KReclaimable: 266140 kB' 'Slab: 1191172 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925032 kB' 'KernelStack: 22048 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10084076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216960 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.514 21:47:04 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace elided: the read loop checks Buffers through HugePages_Rsvd against HugePages_Surp, continuing past each non-match ...]
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44321140 kB' 'MemAvailable: 45413832 kB' 'Buffers: 3740 kB' 'Cached: 9677948 kB' 'SwapCached: 964 kB' 'Active: 8950084 kB' 'Inactive: 1315444 kB' 'Active(anon): 8693984 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586100 kB' 'Mapped: 217616 kB' 'Shmem: 8222928 kB' 'KReclaimable: 266140 kB' 'Slab: 1191140 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925000 kB' 'KernelStack: 22000 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10084096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216976 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 
21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.515 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:20.516 nr_hugepages=1024 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:20.516 resv_hugepages=0 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:20.516 surplus_hugepages=0 00:03:20.516 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:20.516 anon_hugepages=0 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.516 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44321140 kB' 'MemAvailable: 45413832 kB' 'Buffers: 3740 kB' 'Cached: 9677948 kB' 'SwapCached: 964 kB' 'Active: 8950508 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694408 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586020 kB' 'Mapped: 217616 kB' 'Shmem: 8222928 kB' 'KReclaimable: 266140 kB' 'Slab: 1191140 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925000 kB' 'KernelStack: 21888 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10084120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216992 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.517 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25714512 kB' 'MemUsed: 6919924 kB' 'SwapCached: 4 kB' 'Active: 3454632 kB' 'Inactive: 257880 kB' 'Active(anon): 3258136 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553360 kB' 'Mapped: 87404 kB' 'AnonPages: 162368 kB' 'Shmem: 3099656 kB' 'KernelStack: 10824 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89112 kB' 'Slab: 495172 kB' 'SReclaimable: 89112 kB' 'SUnreclaim: 406060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32, 00:03:20.518 21:47:04 setup.sh.hugepages.single_node_setup: the remaining per-node meminfo keys (MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total) are each compared against HugePages_Surp; none match, so every check falls through to the @32 continue and the @31 IFS=': ' / read -r var val _ cycle repeats]
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:03:20.519 node0=1024 expecting 1024
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.519
00:03:20.519 real 0m5.204s
00:03:20.519 user 0m1.396s
00:03:20.519 sys 0m2.413s
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:20.519 21:47:04 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:03:20.519 ************************************
00:03:20.519 END TEST single_node_setup
00:03:20.519 ************************************
00:03:20.519 21:47:04 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:03:20.519 21:47:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:20.519 21:47:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:20.519 21:47:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.519 ************************************
00:03:20.519 START TEST even_2G_alloc
00:03:20.519 ************************************
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
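
The long runs of "[[ <key> == ... ]] / continue" entries in this section come from the get_meminfo helper in setup/common.sh, which walks a meminfo dump key by key until it reaches the requested counter and prints its value (here HugePages_Surp, which is 0). A minimal bash sketch of that lookup pattern, written to mirror what the trace shows rather than the verbatim SPDK helper:

  #!/usr/bin/env bash
  shopt -s extglob                              # needed for the "Node N " prefix strip below
  get_meminfo() {                               # usage: get_meminfo <Key> [numa-node]
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo mem
      # Prefer the per-node view when a node is given and sysfs exposes it.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix every line with "Node N "
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue      # this mismatch/continue is what fills the log above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Surp 0                  # prints 0 on this worker, matching the trace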
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.519 21:47:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:23.816 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:23.816 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
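
The get_test_nr_hugepages / get_test_nr_hugepages_per_node entries above turn the requested 2097152 kB (2 GiB) into 1024 default-size hugepages (2097152 / 2048 = 1024) and split them evenly across the two NUMA nodes, 512 per node, before scripts/setup.sh re-applies the configuration. A rough standalone rendering of that arithmetic, with variable names borrowed from the trace and the unit handling inferred from the numbers above (illustrative only, not the literal hugepages.sh code):

  #!/usr/bin/env bash
  size_kb=2097152                                            # even_2G_alloc requests 2 GiB
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( size_kb / hugepage_kb ))                  # 2097152 / 2048 = 1024

  _no_nodes=2                                                # NUMA nodes on this worker
  declare -a nodes_test
  remaining=$nr_hugepages
  while (( _no_nodes > 0 )); do
      nodes_test[_no_nodes - 1]=$(( remaining / _no_nodes )) # hand each remaining node an equal share
      remaining=$(( remaining - nodes_test[_no_nodes - 1] ))
      (( _no_nodes-- ))
  done
  echo "${nodes_test[@]}"                                    # 512 512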
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44328152 kB' 'MemAvailable: 45420844 kB' 'Buffers: 3740 kB' 'Cached: 9678092 kB' 'SwapCached: 964 kB' 'Active: 8951080 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694980 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586972 kB' 'Mapped: 216656 kB' 'Shmem: 8223072 kB' 'KReclaimable: 266140 kB' 'Slab: 1191548 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925408 kB' 'KernelStack: 22048 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10078496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217104 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.816 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.816 
21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32, 00:03:23.816-818 21:47:07: every key listed ahead of AnonHugePages in the snapshot above (MemAvailable through HardwareCorrupted) is compared against AnonHugePages, fails to match, and falls through to continue and the IFS=': ' / read -r var val _ cycle]
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.818 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44328060 kB' 'MemAvailable: 45420752 kB' 'Buffers: 3740 kB' 'Cached: 9678092 kB' 'SwapCached: 964 kB' 'Active: 8951012 kB'
'Inactive: 1315444 kB' 'Active(anon): 8694912 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586856 kB' 'Mapped: 216636 kB' 'Shmem: 8223072 kB' 'KReclaimable: 266140 kB' 'Slab: 1191548 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925408 kB' 'KernelStack: 22016 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10077012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217040 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB'
[setup/common.sh@31-32, 00:03:23.818-820 21:47:07: every key from MemTotal through HugePages_Total in the snapshot above is compared against HugePages_Surp and skipped the same way, one continue plus IFS=': ' / read -r var val _ per key]
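
Each of these long printf entries is a full /proc/meminfo snapshot that get_meminfo then scans; verify_nr_hugepages only ends up consuming a few of its fields (AnonHugePages, HugePages_Total/Free/Rsvd/Surp, Hugepagesize). As a small illustration (an assumed-equivalent one-liner, not part of the SPDK scripts), the same counters can be pulled out directly:

  # Pull just the hugepage-related counters that the scans above are looking for.
  awk -F': *' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)/ {print $1, $2}' /proc/meminfo
  # On the worker traced here this yields:
  #   AnonHugePages 0 kB
  #   HugePages_Total 1024
  #   HugePages_Free 1024
  #   HugePages_Rsvd 0
  #   HugePages_Surp 0
  #   Hugepagesize 2048 kB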
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44328680 kB' 'MemAvailable: 45421372 kB' 'Buffers: 3740 kB' 'Cached: 9678112 kB' 'SwapCached: 964 kB' 'Active: 8950680 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694580 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586516 kB' 'Mapped: 216636 kB' 'Shmem: 8223092 kB' 'KReclaimable: 266140 kB' 'Slab: 1191552 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925412 kB' 'KernelStack: 21904 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10078400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217040 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB'
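
Once anon, surp and resv have been read back this way, the verify step compares the per-node hugepage allocation against the expected split, the same pattern that printed "node0=1024 expecting 1024" for the previous test. A condensed sketch of that final comparison for this two-node case, reading the observed counts from sysfs (assumed layout; the expected values are the 512/512 nodes_test split computed earlier in the trace):

  #!/usr/bin/env bash
  # Sketch of the per-node check; not the literal hugepages.sh implementation.
  declare -A expected=( [0]=512 [1]=512 )
  for node in 0 1; do
      observed=$(cat "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$node=$observed expecting ${expected[$node]}"
      [[ $observed == "${expected[$node]}" ]] || exit 1
  done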
kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.820 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.821 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:23.822 nr_hugepages=1024 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:23.822 resv_hugepages=0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:23.822 surplus_hugepages=0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:23.822 anon_hugepages=0 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44330604 kB' 'MemAvailable: 45423296 kB' 'Buffers: 3740 kB' 'Cached: 9678132 kB' 'SwapCached: 964 kB' 'Active: 8951280 kB' 'Inactive: 1315444 kB' 'Active(anon): 8695180 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587092 kB' 'Mapped: 216636 kB' 'Shmem: 8223112 kB' 'KReclaimable: 266140 kB' 'Slab: 1191552 kB' 'SReclaimable: 266140 kB' 'SUnreclaim: 925412 kB' 'KernelStack: 22080 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10078556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217040 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.822 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.822 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.823 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:23.824 21:47:08 
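The trace above is the even_2G_alloc accounting step: hugepages.sh reads HugePages_Rsvd, HugePages_Surp and HugePages_Total through get_meminfo, confirms the kernel reports exactly the 1024 requested pages with no reserved or surplus pages, then enumerates the two NUMA nodes and expects an even 512/512 split. A minimal sketch of that check, assuming only the helper and variable names visible in the trace (get_meminfo, nr_hugepages, surp, resv, nodes_sys) and nothing else about setup/hugepages.sh:

  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace above
  total=$(get_meminfo HugePages_Total)   # 1024 in the trace above
  (( total == nr_hugepages + surp + resv ))          # global accounting must balance
  for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=512        # even_2G_alloc: 512 x 2 MB pages expected per node
  done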
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26784940 kB' 'MemUsed: 5849496 kB' 'SwapCached: 4 kB' 'Active: 3453988 kB' 'Inactive: 257880 kB' 'Active(anon): 3257492 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553496 kB' 'Mapped: 86612 kB' 'AnonPages: 161516 kB' 'Shmem: 3099792 kB' 'KernelStack: 10792 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89112 kB' 'Slab: 495332 kB' 'SReclaimable: 89112 kB' 'SUnreclaim: 406220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.824 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 
21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.825 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
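The trace above is a single get_meminfo lookup: pick /proc/meminfo or the per-node sysfs file, strip the "Node N " prefix, then walk the fields with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch of that pattern follows; lookup_meminfo is a hypothetical stand-in for the traced helper, not the SPDK setup/common.sh code verbatim.

lookup_meminfo() {
	local get=$1 node=${2:-} mem_f=/proc/meminfo var val _

	# Per-node counters live under sysfs when a NUMA node is requested.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	# sysfs lines carry a "Node N " prefix that /proc/meminfo lines do not.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"      # value in kB, or a bare count for HugePages_* fields
			return 0
		fi
	done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
	echo 0
}

lookup_meminfo HugePages_Surp 0    # prints 0, as the node 0 lookup above did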
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649384 kB' 'MemFree: 17543196 kB' 'MemUsed: 10106188 kB' 'SwapCached: 960 kB' 'Active: 5496820 kB' 'Inactive: 1057564 kB' 'Active(anon): 5437216 kB' 'Inactive(anon): 112108 kB' 'Active(file): 59604 kB' 'Inactive(file): 945456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6129340 kB' 'Mapped: 130024 kB' 'AnonPages: 425096 kB' 'Shmem: 5123320 kB' 'KernelStack: 11208 kB' 'PageTables: 5240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177028 kB' 'Slab: 696124 kB' 'SReclaimable: 177028 kB' 'SUnreclaim: 519096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.826 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 
21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- 
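With both per-node HugePages_Surp reads returning 0, the loop at hugepages.sh@114-116 simply folds the reserved and surplus counts into the expected per-node totals before the final comparison. A simplified sketch of that accumulation, with variable names mirroring the trace but not the script verbatim:

nodes_test=(512 512)   # expected pages per node after the even 2G split
resv=0                 # global reserved pages (HugePages_Rsvd) folded in at @115

for node in "${!nodes_test[@]}"; do
	surp=$(awk '$3 == "HugePages_Surp:" {print $4; exit}' \
		"/sys/devices/system/node/node$node/meminfo")
	(( nodes_test[node] += resv + ${surp:-0} ))   # both 0 in the trace above
	echo "node$node=${nodes_test[node]} expecting 512"
done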
setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:23.827 node0=512 expecting 512 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:23.827 node1=512 expecting 512 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:23.827 00:03:23.827 real 0m3.249s 00:03:23.827 user 0m1.205s 00:03:23.827 sys 0m2.061s 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.827 21:47:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:23.827 ************************************ 00:03:23.827 END TEST even_2G_alloc 00:03:23.827 ************************************ 00:03:23.827 21:47:08 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:23.827 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.827 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.827 21:47:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.087 ************************************ 00:03:24.087 START TEST odd_alloc 00:03:24.087 ************************************ 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:24.087 21:47:08 
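even_2G_alloc ends with both nodes holding the expected 512 pages, and odd_alloc starts from a request of 2098176 kB (HUGEMEM=2049 MB). The values printed by the trace are consistent with a ceiling division into 2048 kB pages followed by an even split with the remainder pushed onto one node; the rounding and split below are assumptions modelled on those values, not hugepages.sh verbatim.

size_kb=2098176        # requested hugepage memory
hugepagesz_kb=2048     # Hugepagesize from /proc/meminfo
no_nodes=2

nr_hugepages=$(( (size_kb + hugepagesz_kb - 1) / hugepagesz_kb ))   # 1025, an odd count

per_node=$(( nr_hugepages / no_nodes ))        # 512
remainder=$(( nr_hugepages % no_nodes ))       # 1
nodes_test=( "$per_node" $(( per_node + remainder )) )              # 512 and 513

echo "nr_hugepages=$nr_hugepages nodes=${nodes_test[*]}"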
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.087 21:47:08 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:27.382 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.382 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:27.382 21:47:11 
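After setup.sh reports the devices already bound to vfio-pci, verify_nr_hugepages first checks whether transparent hugepages are disabled ([[ ... != *\[\n\e\v\e\r\]* ]]) before reading AnonHugePages. A self-contained sketch of that probe; the sysfs path is the standard kernel location, while the surrounding logic is an assumption based on the trace rather than the script itself.

anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

if [[ $thp != *"[never]"* ]]; then
	# THP may be handing out anonymous huge pages; account for them in kB.
	anon=$(awk '$1 == "AnonHugePages:" {print $2; exit}' /proc/meminfo)
fi
echo "anon=${anon:-0}"    # 0 in the trace above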
setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44347932 kB' 'MemAvailable: 45440576 kB' 'Buffers: 3740 kB' 'Cached: 9678268 kB' 'SwapCached: 964 kB' 'Active: 8950496 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694396 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585764 kB' 'Mapped: 216736 kB' 'Shmem: 8223248 kB' 'KReclaimable: 266044 kB' 'Slab: 1190124 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924080 kB' 'KernelStack: 21904 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10076568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217040 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 
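The snapshot printed above already shows the odd allocation in place: HugePages_Total and HugePages_Free are both 1025 with a 2048 kB page size. A quick consistency check on those numbers (values copied from the snapshot, not re-read from the system):

echo $(( 1025 * 2048 ))   # 2099200 kB, matching the Hugetlb field above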
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 
21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44348488 kB' 'MemAvailable: 45441132 kB' 'Buffers: 3740 kB' 'Cached: 9678268 kB' 'SwapCached: 964 kB' 'Active: 8950580 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694480 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585852 kB' 'Mapped: 216720 kB' 'Shmem: 8223248 kB' 'KReclaimable: 266044 kB' 'Slab: 1190124 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924080 kB' 'KernelStack: 21888 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10076584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217008 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
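With anon=0 established, the verification pass moves on to the global hugepage counters (the second snapshot above feeds the HugePages_Surp lookup). The assertions below are a sketch of that check, modelled on the "expecting" output seen earlier in even_2G_alloc; the exact conditions are assumptions, not the script verbatim.

expected=1025
total=$(awk '$1 == "HugePages_Total:" {print $2; exit}' /proc/meminfo)
free=$(awk  '$1 == "HugePages_Free:"  {print $2; exit}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2; exit}' /proc/meminfo)

(( total == expected )) || echo "unexpected HugePages_Total: $total"
(( free == total ))     || echo "pages already in use: $(( total - free ))"
echo "surp=${surp:-0}"   # 0 in the trace above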
00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.383 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / continue trace for each remaining /proc/meminfo key (Writeback through HugePages_Rsvd), none of which match HugePages_Surp ...] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
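The block above is the xtrace of setup/common.sh's get_meminfo helper: it snapshots the meminfo file, strips any "Node N" prefix, then walks the key/value pairs with IFS=': ' until the requested key matches and echoes its value, which hugepages.sh stores (surp=0 here) before immediately issuing the next query for HugePages_Rsvd. The \H\u\g\e\P\a\g\e\s\_... escaping is simply how bash's xtrace renders the quoted right-hand side of the [[ ... == ... ]] comparison, i.e. a literal string match rather than a glob. A minimal sketch of that lookup, reconstructed from the trace rather than copied from the repository (the name get_meminfo_sketch and the error handling are illustrative):

#!/usr/bin/env bash
# Sketch of the lookup traced above: mapfile the meminfo file, strip the
# per-node prefix, then scan key/value pairs until the requested key matches.
shopt -s extglob                      # needed for the "Node N " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ mem_f mem
    mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo, as the trace does for node 0 further down.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # node-local files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Surp       # prints 0 on the system captured above
get_meminfo_sketch HugePages_Total 0    # prints 513 for NUMA node 0 (see the node0 snapshot further down)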
# [[ -n '' ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44349240 kB' 'MemAvailable: 45441884 kB' 'Buffers: 3740 kB' 'Cached: 9678308 kB' 'SwapCached: 964 kB' 'Active: 8949448 kB' 'Inactive: 1315444 kB' 'Active(anon): 8693348 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585096 kB' 'Mapped: 216644 kB' 'Shmem: 8223288 kB' 'KReclaimable: 266044 kB' 'Slab: 1190084 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924040 kB' 'KernelStack: 21872 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10076604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217008 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue [... identical IFS=': ' / read -r var val _ / continue trace for the remaining keys (SwapCached through HugePages_Total), none of which match HugePages_Rsvd ...] 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:27.386 nr_hugepages=1025 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:27.386 resv_hugepages=0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:27.386 surplus_hugepages=0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:27.386 anon_hugepages=0 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.386 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44349436 kB' 'MemAvailable: 45442080 kB' 'Buffers: 3740 kB' 'Cached: 9678312 kB' 'SwapCached: 964 kB' 'Active: 8949688 kB' 'Inactive: 1315444 kB' 'Active(anon): 8693588 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585340 kB' 'Mapped: 216644 kB' 'Shmem: 8223292 kB' 'KReclaimable: 266044 kB' 'Slab: 1190084 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924040 kB' 'KernelStack: 21872 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480912 kB' 'Committed_AS: 10076624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217008 kB' 'VmallocChunk: 0 kB' 
'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.387 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' [... identical IFS=': ' / read -r var val _ / continue trace for the remaining keys (Active(anon) through ShmemPmdMapped), none of which match HugePages_Total ...] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
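Around this second query, hugepages.sh does the bookkeeping visible at @101-@109: it reports nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then requires that the HugePages_Total it reads back accounts for the whole odd allocation. The gist, as a sketch that assumes the get_meminfo_sketch helper from the earlier sketch is in scope (variable names are illustrative, not the script's own):

# Bookkeeping sketch for the odd_alloc case traced here: 1025 huge pages were
# requested, and the kernel-reported counters must add up to that request.
nr_hugepages=1025                               # the odd page count under test
surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in the trace above
resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in the trace above
total=$(get_meminfo_sketch HugePages_Total)     # 1025 once this scan completes below
(( total == nr_hugepages + surp + resv )) ||
    echo "odd_alloc: hugepage accounting mismatch (total=$total surp=$surp resv=$resv)" >&2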
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26795516 kB' 'MemUsed: 5838920 kB' 'SwapCached: 4 kB' 'Active: 3454908 kB' 'Inactive: 257880 kB' 'Active(anon): 3258412 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553668 kB' 'Mapped: 86620 kB' 'AnonPages: 162312 kB' 'Shmem: 3099964 kB' 'KernelStack: 10872 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89016 kB' 'Slab: 493988 kB' 'SReclaimable: 89016 kB' 'SUnreclaim: 404972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
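At hugepages.sh@111 above, get_nodes walks /sys/devices/system/node/node* and records the expected split for this odd allocation (nodes_sys[0]=513, nodes_sys[1]=512 at @29), and the loop at @114-@116 then re-reads each node's own meminfo; the node0 snapshot just printed indeed reports HugePages_Total: 513. A per-node check in the same spirit, again assuming the get_meminfo_sketch helper from the earlier sketch (the real loop in setup/hugepages.sh additionally folds reserved/surplus pages into the expected counts):

# Per-node verification sketch for the 1025-page odd allocation: node 0 should
# hold 513 huge pages and node 1 the remaining 512.
expected=(513 512)                     # nodes_sys[0] / nodes_sys[1] from the trace
for node in 0 1; do
    got=$(get_meminfo_sketch HugePages_Total "$node")
    (( got == expected[node] )) ||
        echo "node$node: expected ${expected[node]} huge pages, got $got" >&2
done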
00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
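Editor's note: this get_meminfo call reads node 1's surplus for the odd_alloc pass, which is verified further down this trace with "node0=513 expecting 513" and "node1=512 expecting 512" (Hugepagesize is 2048 kB on this machine). For orientation only, requesting such a per-node split from the kernel and checking it back can be done through the standard sysfs files as in the hedged sketch below; the helper names are illustrative, not part of the project's setup scripts, and the writes need root.

set_node_hugepages() {      # set_node_hugepages <node> <count>
    local node=$1 count=$2
    echo "$count" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
}

check_node_hugepages() {    # check_node_hugepages <node> <expected>
    local node=$1 expected=$2 actual
    actual=$(< "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$actual expecting $expected"
    [[ $actual -eq $expected ]]
}

# Mirroring the odd split verified below (513 pages on node 0, 512 on node 1):
# set_node_hugepages 0 513 && set_node_hugepages 1 512
# check_node_hugepages 0 513 && check_node_hugepages 1 512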
00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649384 kB' 'MemFree: 17554312 kB' 'MemUsed: 10095072 kB' 'SwapCached: 960 kB' 'Active: 5494580 kB' 'Inactive: 1057564 kB' 'Active(anon): 5434976 kB' 'Inactive(anon): 112108 kB' 'Active(file): 59604 kB' 'Inactive(file): 945456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6129384 kB' 'Mapped: 130024 kB' 'AnonPages: 422784 kB' 'Shmem: 5123364 kB' 'KernelStack: 11000 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177028 kB' 'Slab: 696096 kB' 'SReclaimable: 177028 kB' 'SUnreclaim: 519068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.389 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:27.390 node0=513 expecting 513 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:27.390 node1=512 expecting 512 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:27.390 00:03:27.390 real 0m3.381s 00:03:27.390 user 0m1.256s 00:03:27.390 sys 0m2.154s 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.390 21:47:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:27.390 ************************************ 00:03:27.390 END TEST odd_alloc 00:03:27.390 ************************************ 00:03:27.390 21:47:11 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test 
custom_alloc custom_alloc 00:03:27.390 21:47:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.390 21:47:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.390 21:47:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.390 ************************************ 00:03:27.390 START TEST custom_alloc 00:03:27.390 ************************************ 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:27.390 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.391 21:47:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:30.682 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.682 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.946 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.946 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.946 
21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 43309880 kB' 'MemAvailable: 44402524 kB' 'Buffers: 3740 kB' 'Cached: 9678444 kB' 'SwapCached: 964 kB' 'Active: 8951204 kB' 'Inactive: 1315444 kB' 'Active(anon): 8695104 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586340 kB' 'Mapped: 216752 kB' 'Shmem: 8223424 kB' 'KReclaimable: 266044 kB' 'Slab: 1190516 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924472 kB' 'KernelStack: 21872 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10077272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217008 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.947 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
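Editor's note: in the custom_alloc verify step traced here, setup/hugepages.sh@95 compares the string "always [madvise] never" against *[never]* before hugepages.sh@96 asks get_meminfo for AnonHugePages. That string has the format of /sys/kernel/mm/transparent_hugepage/enabled; assuming that is its source, the guard amounts to the hedged sketch below (variable names are illustrative).

# Hedged sketch: only sample AnonHugePages when transparent hugepages are not fully disabled.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    # THP may be active, so anonymous huge page usage is worth recording.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "AnonHugePages: ${anon_kb:-0} kB"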
00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 43310156 kB' 'MemAvailable: 44402800 kB' 'Buffers: 3740 kB' 'Cached: 9678448 kB' 'SwapCached: 964 kB' 'Active: 8950500 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694400 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586096 kB' 'Mapped: 216656 kB' 'Shmem: 8223428 kB' 'KReclaimable: 266044 kB' 'Slab: 1190484 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924440 kB' 'KernelStack: 21888 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10077288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216976 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.948 21:47:15 
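The setup/common.sh trace above is get_meminfo scanning /proc/meminfo: the file is loaded with mapfile, any leading "Node <n> " prefix is stripped, and each line is split on ': ' until the requested key matches, at which point the value is echoed. A minimal standalone sketch of that pattern, using a hypothetical helper name my_get_meminfo and only the behaviour visible in the trace (not the SPDK source itself):

  #!/usr/bin/env bash
  # Sketch of the /proc/meminfo lookup pattern seen in the trace; hypothetical helper.
  shopt -s extglob                       # needed for the +([0-9]) prefix strip below
  my_get_meminfo() {
      local get=$1 node=${2:-}           # key to look up, optional NUMA node
      local var val _ mem_f mem line
      mem_f=/proc/meminfo
      # Per-node queries read the node-specific meminfo file when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # node files prefix each line with "Node <n> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  my_get_meminfo HugePages_Surp          # prints 0 on this node, as in the trace

The empty node argument is what the node= and /sys/devices/system/node/node/meminfo lines in the trace correspond to: without a node the helper falls back to the system-wide /proc/meminfo.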
00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.948 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the read / compare / continue cycle repeats for every field of the snapshot above, MemFree through HugePages_Free, until the requested key is reached]
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-31: get_meminfo prologue for HugePages_Rsvd repeats as above: get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem, strip of the "Node <n> " prefix, IFS=': ' read]
00:03:30.950 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 43315096 kB' 'MemAvailable: 44407740 kB' 'Buffers: 3740 kB' 'Cached: 9678484 kB' 'SwapCached: 964 kB' 'Active: 8950136 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694036 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585644 kB' 'Mapped: 216656 kB' 'Shmem: 8223464 kB' 'KReclaimable: 266044 kB' 'Slab: 1190452 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924408 kB' 'KernelStack: 21856 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10077308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216960 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB'
[setup/common.sh@31-32: the read / compare / continue cycle walks every field of this snapshot, MemTotal through HugePages_Free, until the requested key is reached]
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:03:30.952 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
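At this point hugepages.sh has collected anon=0, surp=0 and resv=0 and checks them against the 1536 pages the custom_alloc test configured. A compact restatement of that accounting, reusing the hypothetical my_get_meminfo sketch above (variable names mirror the trace; this is an illustration, not setup/hugepages.sh itself):

  # Hypothetical restatement of the checks traced at setup/hugepages.sh@96-@109.
  anon=$(my_get_meminfo AnonHugePages)     # transparent huge pages in use -> 0
  surp=$(my_get_meminfo HugePages_Surp)    # surplus pages -> 0
  resv=$(my_get_meminfo HugePages_Rsvd)    # reserved pages -> 0
  nr_hugepages=1536                        # what the test asked for across the nodes
  echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
       "surplus_hugepages=$surp" "anon_hugepages=$anon"
  # The custom allocation only passes if the requested count is fully accounted
  # for, with no surplus or reserved pages left over.
  (( 1536 == nr_hugepages + surp + resv )) || exit 1
  (( 1536 == nr_hugepages )) && my_get_meminfo HugePages_Total   # -> 1536

The trailing get_meminfo HugePages_Total call is what produces the third /proc/meminfo snapshot below.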
[setup/common.sh@17-31: get_meminfo prologue for HugePages_Total repeats as above: get=HugePages_Total, node=, mem_f=/proc/meminfo, mapfile -t mem, strip of the "Node <n> " prefix, IFS=': ' read]
00:03:30.953 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 43313088 kB' 'MemAvailable: 44405732 kB' 'Buffers: 3740 kB' 'Cached: 9678488 kB' 'SwapCached: 964 kB' 'Active: 8950508 kB' 'Inactive: 1315444 kB' 'Active(anon): 8694408 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 586060 kB' 'Mapped: 216656 kB' 'Shmem: 8223468 kB' 'KReclaimable: 266044 kB' 'Slab: 1190452 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924408 kB' 'KernelStack: 21872 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957648 kB' 'Committed_AS: 10077332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216976 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB'
[setup/common.sh@31-32: the read / compare / continue cycle walks this snapshot field by field, MemTotal through VmallocChunk so far, comparing each key against HugePages_Total; the scan resumes below]
00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:30.954 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
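
The get_meminfo calls traced here read either /proc/meminfo or the per-node /sys/devices/system/node/nodeN/meminfo file and walk it field by field with IFS=': ' and read, echoing the value once the requested field matches. A minimal bash sketch of that pattern follows; the function and file names are taken from the trace, but the body is a simplified reconstruction, not the literal setup/common.sh source.

#!/usr/bin/env bash
# Sketch of the meminfo parsing pattern visible in the trace above.
# Simplified reconstruction; not the exact setup/common.sh implementation.
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N "

get_meminfo() {
    local get=$1        # field to look up, e.g. HugePages_Surp
    local node=${2:-}   # optional NUMA node number
    local mem_f=/proc/meminfo
    local var val _

    # With a node argument, prefer that node's own meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # the trace shows e.g. 1536 for HugePages_Total
            return 0
        fi
    done
    return 1
}

# Example (assumes a NUMA system that exposes a node0):
get_meminfo HugePages_Surp 0
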
00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 26785276 kB' 'MemUsed: 5849160 kB' 'SwapCached: 4 kB' 'Active: 3455752 kB' 'Inactive: 257880 kB' 'Active(anon): 3259256 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553740 kB' 'Mapped: 86632 kB' 'AnonPages: 163132 kB' 'Shmem: 3100036 kB' 'KernelStack: 10856 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89016 kB' 'Slab: 494428 kB' 'SReclaimable: 89016 kB' 'SUnreclaim: 405412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.216 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27649384 kB' 'MemFree: 16529544 kB' 'MemUsed: 11119840 kB' 'SwapCached: 960 kB' 'Active: 5495076 kB' 
'Inactive: 1057564 kB' 'Active(anon): 5435472 kB' 'Inactive(anon): 112108 kB' 'Active(file): 59604 kB' 'Inactive(file): 945456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6129472 kB' 'Mapped: 130024 kB' 'AnonPages: 423216 kB' 'Shmem: 5123452 kB' 'KernelStack: 11000 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 177028 kB' 'Slab: 696024 kB' 'SReclaimable: 177028 kB' 'SUnreclaim: 518996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
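
Around each of these per-node reads, setup/hugepages.sh adds the reserved and surplus counts into nodes_test and later prints the "nodeN=... expecting ..." lines seen further down. A standalone sketch of that bookkeeping follows; the array names come from the trace, the logic is simplified, and this run's 512/1024 split is hard-coded as an assumption.

#!/usr/bin/env bash
# Sketch of the per-node hugepage bookkeeping traced here
# (setup/hugepages.sh@114-116 and @125-127); logic simplified.
nodes_test=(512 1024)   # per-node request made by this test (taken from this run)
nodes_sys=(512 1024)    # what the system actually allocated per node
resv=0                  # reserved pages; 0 in this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node surplus, read the same way the traced get_meminfo does it.
    surp=0
    while IFS=': ' read -r _ _ var val _; do
        if [[ $var == HugePages_Surp ]]; then
            surp=$val
            break
        fi
    done < "/sys/devices/system/node/node$node/meminfo"
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
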
00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.217 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 
21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
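
Once both nodes have been read, the test applies two checks: the total-count check from hugepages.sh@109 seen earlier, and the per-node split comparison at @129 just below. A compressed sketch of those comparisons, with the values reported in this particular run filled in as assumptions:

# Sketch of the two acceptance checks in this test (hugepages.sh@109 and @129);
# the numbers are the ones reported in this run.
nr_hugepages=1536; surp=0; resv=0
(( 1536 == nr_hugepages + surp + resv )) || echo "unexpected total hugepage count"

expected="512,1024"    # node0,node1 split the test asked for
actual="512,1024"      # split derived from the per-node reads above
[[ $actual == "$expected" ]] || echo "unexpected per-node hugepage split"
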
00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:31.218 node0=512 expecting 512 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:31.218 node1=1024 expecting 1024 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:31.218 00:03:31.218 real 0m3.705s 00:03:31.218 user 0m1.362s 00:03:31.218 sys 0m2.411s 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.218 21:47:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.218 ************************************ 00:03:31.218 END TEST custom_alloc 00:03:31.218 ************************************ 00:03:31.218 21:47:15 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:31.218 21:47:15 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.219 21:47:15 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.219 21:47:15 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.219 ************************************ 00:03:31.219 START TEST no_shrink_alloc 00:03:31.219 ************************************ 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.219 21:47:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:34.515 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.7 (8086 2021): Already using the vfio-pci 
driver 00:03:34.515 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.515 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44373696 kB' 'MemAvailable: 45466340 kB' 'Buffers: 3740 kB' 'Cached: 9678612 kB' 'SwapCached: 964 kB' 'Active: 8952380 kB' 'Inactive: 1315444 kB' 'Active(anon): 8696280 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587876 kB' 'Mapped: 216704 kB' 'Shmem: 8223592 kB' 'KReclaimable: 266044 kB' 'Slab: 1191080 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 925036 kB' 'KernelStack: 21920 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 
10080588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217216 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.515 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.516 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
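The long run of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue' entries above is the field scan inside the get_meminfo helper: the meminfo snapshot is split with IFS=': ', every field other than the requested one is skipped, and the value of the matching field is echoed (AnonHugePages is 0 kB in this snapshot, hence anon=0 at hugepages.sh@96). A minimal sketch of that scan for the system-wide case, reconstructed from the setup/common.sh trace above; names follow the trace, but this is illustrative rather than the exact SPDK helper:

  # Sketch of the per-field scan traced at setup/common.sh@31-33 (illustrative).
  get_meminfo() {
      local get=$1                           # e.g. AnonHugePages, HugePages_Surp
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until it matches
          echo "${val:-0}"                   # value in kB (bare count for HugePages_*)
          return 0
      done < /proc/meminfo
      echo 0                                 # field absent: report 0
  }

  anon=$(get_meminfo AnonHugePages)          # -> 0 in the run above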
setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44374976 kB' 'MemAvailable: 45467620 kB' 'Buffers: 3740 kB' 'Cached: 9678616 kB' 'SwapCached: 964 kB' 'Active: 8952200 kB' 'Inactive: 1315444 kB' 'Active(anon): 8696100 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587528 kB' 'Mapped: 216680 kB' 'Shmem: 8223596 kB' 'KReclaimable: 266044 kB' 'Slab: 1191080 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 925036 kB' 'KernelStack: 21920 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10080608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217200 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.517 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.518 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.519 
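One detail worth noting in these traces: each get_meminfo call tests [[ -e /sys/devices/system/node/node/meminfo ]] with an empty node value (common.sh@18 sets 'local node='), so the literal path .../node/node/meminfo never exists and the helper keeps reading the system-wide /proc/meminfo. When a NUMA node is passed instead, the per-node file would be selected and its 'Node N ' line prefix stripped (the mapfile expansion at common.sh@29) before the same field scan runs. A rough illustration of that branch, with a hypothetical node number and the prefix strip simplified from the original extglob pattern:

  node=0                                                 # hypothetical NUMA node argument
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node $node }")                          # 'Node 0 HugePages_Total: 512' -> 'HugePages_Total: 512'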
21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44377188 kB' 'MemAvailable: 45469832 kB' 'Buffers: 3740 kB' 'Cached: 9678632 kB' 'SwapCached: 964 kB' 'Active: 8952908 kB' 'Inactive: 1315444 kB' 'Active(anon): 8696808 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588328 kB' 'Mapped: 216696 kB' 'Shmem: 8223612 kB' 'KReclaimable: 266044 kB' 'Slab: 1191024 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924980 kB' 'KernelStack: 22000 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10080628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217248 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.519 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.520 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:34.521 nr_hugepages=1024 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:34.521 resv_hugepages=0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:34.521 surplus_hugepages=0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:34.521 anon_hugepages=0 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44378324 kB' 'MemAvailable: 45470968 kB' 'Buffers: 3740 kB' 'Cached: 9678656 kB' 'SwapCached: 964 kB' 'Active: 8952368 kB' 'Inactive: 1315444 kB' 'Active(anon): 8696268 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587708 kB' 'Mapped: 216680 kB' 'Shmem: 8223636 kB' 'KReclaimable: 266044 kB' 'Slab: 1190928 kB' 'SReclaimable: 266044 kB' 'SUnreclaim: 924884 kB' 'KernelStack: 21904 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10079152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217168 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.521 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 
21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:34.522 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:34.523 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25737176 kB' 'MemUsed: 6897260 kB' 'SwapCached: 4 kB' 'Active: 3455416 kB' 'Inactive: 257880 kB' 'Active(anon): 3258920 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553748 kB' 'Mapped: 86656 kB' 'AnonPages: 162660 kB' 'Shmem: 3100044 kB' 'KernelStack: 10968 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89016 kB' 'Slab: 494948 kB' 'SReclaimable: 89016 kB' 'SUnreclaim: 405932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.523 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 
21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:34.524 21:47:18 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:34.524 node0=1024 expecting 1024 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.524 21:47:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:37.819 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.819 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:37.819 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44360680 kB' 'MemAvailable: 45453276 kB' 'Buffers: 3740 kB' 'Cached: 9678764 kB' 'SwapCached: 964 kB' 'Active: 8954120 kB' 'Inactive: 1315444 kB' 'Active(anon): 8698020 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588792 kB' 'Mapped: 216780 kB' 'Shmem: 8223744 kB' 'KReclaimable: 265948 kB' 'Slab: 1190412 kB' 'SReclaimable: 265948 kB' 'SUnreclaim: 924464 kB' 'KernelStack: 21888 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10081472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217152 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 
21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.819 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44360108 kB' 'MemAvailable: 45452704 kB' 'Buffers: 3740 kB' 'Cached: 9678768 kB' 'SwapCached: 964 kB' 'Active: 8954004 kB' 'Inactive: 1315444 kB' 'Active(anon): 8697904 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588716 kB' 'Mapped: 216764 kB' 'Shmem: 8223748 kB' 'KReclaimable: 265948 kB' 'Slab: 1190412 kB' 'SReclaimable: 265948 kB' 'SUnreclaim: 924464 kB' 'KernelStack: 22000 kB' 
'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10081492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217264 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
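The long runs of "continue" above and below are setup/common.sh's get_meminfo() scanning every field of /proc/meminfo and skipping each key that is not the one requested (here HugePages_Surp). A minimal bash sketch of that lookup, reconstructed from the traced commands rather than copied from the script (the real common.sh buffers the file with mapfile and strips per-node "Node N " prefixes first):

get_meminfo() {
    # Simplified reconstruction; argument handling in the real script may differ.
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # A per-node meminfo file is only used when a NUMA node was actually passed.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # -> the repeated "continue" entries in this log
        echo "$val"                        # e.g. "0" for HugePages_Surp
        return 0
    done < "$mem_f"
    return 1
}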
00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.821 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44356108 kB' 'MemAvailable: 45448704 kB' 'Buffers: 3740 kB' 'Cached: 9678784 kB' 'SwapCached: 964 kB' 'Active: 8953772 kB' 'Inactive: 1315444 kB' 'Active(anon): 8697672 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588880 kB' 'Mapped: 216688 kB' 'Shmem: 8223764 kB' 'KReclaimable: 265948 kB' 'Slab: 1190404 kB' 'SReclaimable: 265948 kB' 'SUnreclaim: 924456 kB' 'KernelStack: 21984 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10079904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217216 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.822 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
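Each get_meminfo call in this excerpt begins with "local node=", "[[ -e /sys/devices/system/node/node/meminfo ]]" and "[[ -n '' ]]": no NUMA node was requested, so the candidate path does not exist, both tests fail, and the function keeps reading the system-wide /proc/meminfo. A hedged illustration of that fallback (the helper name pick_meminfo_file is hypothetical, not from the script, which inlines this logic):

# Hypothetical helper, for illustration only.
pick_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    # With node='' the candidate is literally /sys/devices/system/node/node/meminfo,
    # which never exists, so /proc/meminfo is kept, exactly as the trace shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}
pick_meminfo_file ''    # -> /proc/meminfo (this run)
pick_meminfo_file 0     # -> /sys/devices/system/node/node0/meminfo, when that node exists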
00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.823 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.824 21:47:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:37.824 nr_hugepages=1024 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:37.824 resv_hugepages=0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:37.824 surplus_hugepages=0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:37.824 anon_hugepages=0 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.824 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283820 kB' 'MemFree: 44356352 kB' 'MemAvailable: 45448948 kB' 'Buffers: 3740 kB' 'Cached: 9678804 kB' 'SwapCached: 964 kB' 'Active: 8953480 kB' 'Inactive: 1315444 kB' 'Active(anon): 8697380 kB' 'Inactive(anon): 112784 kB' 'Active(file): 256100 kB' 'Inactive(file): 1202660 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8363516 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588568 kB' 'Mapped: 216688 kB' 'Shmem: 8223784 kB' 'KReclaimable: 265948 kB' 'Slab: 1190404 kB' 'SReclaimable: 265948 kB' 'SUnreclaim: 924456 kB' 'KernelStack: 21984 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481936 kB' 'Committed_AS: 10078320 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 217168 kB' 'VmallocChunk: 0 kB' 'Percpu: 90048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3134836 kB' 'DirectMap2M: 35348480 kB' 'DirectMap1G: 31457280 kB' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 
21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.825 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.826 
21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634436 kB' 'MemFree: 25738488 kB' 'MemUsed: 6895948 kB' 'SwapCached: 4 kB' 'Active: 3455128 kB' 'Inactive: 257880 kB' 'Active(anon): 3258632 kB' 'Inactive(anon): 676 kB' 'Active(file): 196496 kB' 'Inactive(file): 257204 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3553760 kB' 'Mapped: 86664 kB' 'AnonPages: 162396 kB' 'Shmem: 3100056 kB' 'KernelStack: 10872 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88920 kB' 'Slab: 494496 kB' 'SReclaimable: 88920 kB' 'SUnreclaim: 405576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:37.826 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.827 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:37.828 node0=1024 expecting 1024 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.828 00:03:37.828 real 0m6.649s 00:03:37.828 user 0m2.438s 00:03:37.828 sys 0m4.258s 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.828 21:47:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.828 ************************************ 00:03:37.828 END TEST no_shrink_alloc 00:03:37.828 ************************************ 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:37.828 21:47:22 setup.sh.hugepages -- 
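Editor's note: between the two meminfo dumps, setup/hugepages.sh does the per-node bookkeeping that ends in the "node0=1024 expecting 1024" line above: get_nodes finds node0 and node1 (no_nodes=2), the reserved and surplus pages just read back are folded into each node's expectation, and the result is compared against what the kernel reports. The sketch below is an illustration with names taken from the trace, not the verbatim hugepages.sh source; the initial nodes_test split and the nr_hugepages read are assumptions about where the logged 1024/0 values come from, and it reuses the get_meminfo sketch above:

    nodes_sys=(); nodes_test=(1024 0); sorted_t=(); sorted_s=(); resv=0
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
    done
    no_nodes=${#nodes_sys[@]}                        # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # fold reserved pages into the expectation
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 in the node0 dump above
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # The test then asserts expectation and reality agree ([[ 1024 == 1024 ]] at hugepages.sh@129).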
setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:37.828 21:47:22 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:37.828 00:03:37.828 real 0m22.875s 00:03:37.828 user 0m7.948s 00:03:37.828 sys 0m13.743s 00:03:37.828 21:47:22 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:37.828 21:47:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.828 ************************************ 00:03:37.828 END TEST hugepages 00:03:37.828 ************************************ 00:03:37.828 21:47:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:37.828 21:47:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:37.828 21:47:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:37.828 21:47:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.088 ************************************ 00:03:38.088 START TEST driver 00:03:38.088 ************************************ 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:38.088 * Looking for test storage... 00:03:38.088 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:38.088 21:47:22 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.088 --rc genhtml_branch_coverage=1 00:03:38.088 --rc genhtml_function_coverage=1 00:03:38.088 --rc genhtml_legend=1 00:03:38.088 --rc geninfo_all_blocks=1 00:03:38.088 --rc geninfo_unexecuted_blocks=1 00:03:38.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:38.088 ' 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.088 --rc genhtml_branch_coverage=1 00:03:38.088 --rc genhtml_function_coverage=1 00:03:38.088 --rc genhtml_legend=1 00:03:38.088 --rc geninfo_all_blocks=1 00:03:38.088 --rc geninfo_unexecuted_blocks=1 00:03:38.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:38.088 ' 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.088 --rc genhtml_branch_coverage=1 00:03:38.088 --rc genhtml_function_coverage=1 00:03:38.088 --rc genhtml_legend=1 00:03:38.088 --rc geninfo_all_blocks=1 00:03:38.088 --rc geninfo_unexecuted_blocks=1 00:03:38.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:38.088 ' 00:03:38.088 21:47:22 setup.sh.driver -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:38.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:38.088 --rc genhtml_branch_coverage=1 00:03:38.088 --rc genhtml_function_coverage=1 00:03:38.088 --rc genhtml_legend=1 00:03:38.088 --rc geninfo_all_blocks=1 00:03:38.088 --rc geninfo_unexecuted_blocks=1 00:03:38.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:38.088 ' 00:03:38.088 21:47:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:38.088 21:47:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.088 21:47:22 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.364 21:47:26 setup.sh.driver -- 
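Editor's note: the run_test preamble that opens the driver suite first probes the installed lcov with the lt/cmp_versions helpers from scripts/common.sh, as traced above: both version strings are split on '.', '-' and ':', then compared component by component until one side wins. A condensed sketch of that comparison, reconstructed from the logged commands; only the '<' and '>' operators are shown, and the real "decimal" helper that validates each component is reduced to a default-to-zero expansion:

    # usage: lt 1.15 2  -> exit 0 when the first version is older than the second
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 lt=0 gt=0 v
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # Missing components count as 0; the real helper also checks they are digits.
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { gt=1; break; }
            (( d1 < d2 )) && { lt=1; break; }
        done
        case $op in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

In the trace the check is lt 1.15 2, which succeeds (1 < 2 at the first component), so the gcov-tool LCOV_OPTS for the older lcov are exported before the driver tests run.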
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:43.364 21:47:26 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.364 21:47:26 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.364 21:47:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:43.364 ************************************ 00:03:43.364 START TEST guess_driver 00:03:43.364 ************************************ 00:03:43.364 21:47:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:43.364 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:43.365 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:43.365 Looking for driver=vfio-pci 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
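Editor's note: the guess_driver test settles on vfio-pci in three steps, all visible in the trace above: the kernel's unsafe-noiommu knob exists (unsafe_vfio=N), /sys/kernel/iommu_groups holds 176 populated groups, and modprobe --show-depends vfio_pci resolves to real .ko.xz modules, so the driver is considered loadable. A hedged sketch of that decision path; the function name pick_vfio_driver is mine, and this paraphrases setup/driver.sh as logged rather than quoting its source:

    shopt -s nullglob                     # so an empty iommu_groups directory counts as zero entries
    pick_vfio_driver() {
        local unsafe_vfio=N
        # 1. Record whether unsafe no-IOMMU mode is available (N in the trace above).
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # 2. vfio needs a working IOMMU: require at least one populated IOMMU group.
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        (( ${#iommu_groups[@]} > 0 )) || return 1     # 176 groups on this box
        # 3. vfio_pci and its dependencies must resolve to real kernel modules.
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }
    driver=$(pick_vfio_driver) || driver='No valid driver found'

The trace's final checks mirror the last line: driver is set to vfio-pci, compared against the "No valid driver found" sentinel, and then echoed as "Looking for driver=vfio-pci".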
# setup output config 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.365 21:47:26 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:45.903 21:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:45.903 21:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:47.806 21:47:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.807 21:47:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.170 00:03:53.170 real 0m9.753s 00:03:53.170 user 0m2.568s 00:03:53.170 sys 0m4.912s 00:03:53.170 21:47:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.170 21:47:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.170 ************************************ 00:03:53.170 END TEST guess_driver 00:03:53.170 ************************************ 00:03:53.170 00:03:53.170 real 0m14.339s 00:03:53.170 user 0m3.733s 00:03:53.170 sys 0m7.486s 00:03:53.170 21:47:36 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.170 21:47:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:53.170 ************************************ 00:03:53.170 END TEST driver 00:03:53.170 ************************************ 00:03:53.170 21:47:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:53.170 21:47:36 setup.sh -- 
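Editor's note: after picking vfio-pci, guess_driver replays the output of setup.sh config line by line; that is the long run of "read -r _ _ _ _ marker setup_driver" entries above, each checking [[ -> == -> ]] and [[ vfio-pci == vfio-pci ]]. Every device line whose fifth field is the "->" marker must name the guessed driver, and the fail counter checked at driver.sh@64 stays 0 only if they all do. A small sketch of that verification loop, reconstructed from the logged commands; the exact field layout of a config line is inferred from the read pattern, not quoted from setup.sh:

    fail=0
    driver=vfio-pci
    # setup.sh config prints one line per device; everything before the "->" marker is
    # skipped, and the word after it is the driver the device is currently bound to.
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue             # only "bound to" lines carry the marker
        [[ $setup_driver == "$driver" ]] || fail=1    # every device must use the guessed driver
    done < <(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo "all devices bound to $driver"

On this machine every bound device reports vfio-pci, so fail stays 0 and the suite proceeds to "END TEST guess_driver" as logged above.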
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.170 21:47:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.170 21:47:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.170 ************************************ 00:03:53.170 START TEST devices 00:03:53.170 ************************************ 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:03:53.170 * Looking for test storage... 00:03:53.170 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.170 21:47:36 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.170 --rc genhtml_branch_coverage=1 00:03:53.170 --rc genhtml_function_coverage=1 00:03:53.170 --rc genhtml_legend=1 00:03:53.170 --rc geninfo_all_blocks=1 00:03:53.170 --rc geninfo_unexecuted_blocks=1 00:03:53.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.170 ' 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.170 --rc genhtml_branch_coverage=1 00:03:53.170 --rc genhtml_function_coverage=1 00:03:53.170 --rc genhtml_legend=1 00:03:53.170 --rc geninfo_all_blocks=1 00:03:53.170 --rc geninfo_unexecuted_blocks=1 00:03:53.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.170 ' 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.170 --rc genhtml_branch_coverage=1 00:03:53.170 --rc genhtml_function_coverage=1 00:03:53.170 --rc genhtml_legend=1 00:03:53.170 --rc geninfo_all_blocks=1 00:03:53.170 --rc geninfo_unexecuted_blocks=1 00:03:53.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.170 ' 00:03:53.170 21:47:36 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:53.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.170 --rc genhtml_branch_coverage=1 00:03:53.170 --rc genhtml_function_coverage=1 00:03:53.170 --rc genhtml_legend=1 00:03:53.170 --rc geninfo_all_blocks=1 00:03:53.170 --rc geninfo_unexecuted_blocks=1 00:03:53.170 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:53.170 ' 00:03:53.171 21:47:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:53.171 21:47:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:53.171 21:47:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.171 21:47:36 setup.sh.devices -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:03:56.465 21:47:40 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:56.465 No valid GPT data, bailing 00:03:56.465 21:47:40 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:03:56.465 21:47:40 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:56.465 21:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:56.465 21:47:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:56.465 21:47:40 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.466 21:47:40 
setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.466 21:47:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.466 ************************************ 00:03:56.466 START TEST nvme_mount 00:03:56.466 ************************************ 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.466 21:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:57.405 Creating new GPT entries in memory. 00:03:57.405 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.405 other utilities. 00:03:57.405 21:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.405 21:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.405 21:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.405 21:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.405 21:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:58.343 Creating new GPT entries in memory. 00:03:58.343 The operation has completed successfully. 
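For reference, the nvme_mount steps logged above and below reduce to: wipe the scratch disk, create one ~1 GiB GPT partition, format it, mount it under the SPDK test tree, and drop a dummy file to verify against. A minimal standalone sketch of that sequence, assuming /dev/nvme0n1 is expendable (the device, partition range, and mount path are taken from the log; this is an illustration, not the setup/common.sh implementation):

#!/usr/bin/env bash
set -euo pipefail
# Assumption: /dev/nvme0n1 is a scratch disk whose contents may be destroyed.
DISK=/dev/nvme0n1
MNT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$DISK" --zap-all                 # destroy existing GPT/MBR metadata
sgdisk "$DISK" --new=1:2048:2099199      # ~1 GiB partition, same sector range as the log
mkdir -p "$MNT"
mkfs.ext4 -qF "${DISK}p1"                # quick, forced ext4 format of the new partition
mount "${DISK}p1" "$MNT"                 # mount where the test expects it
touch "$MNT/test_nvme"                   # dummy file the verify step checks for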
00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1013389 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.343 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:58.344 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.603 21:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.894 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.894 21:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.894 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:01.894 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:01.894 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.894 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.894 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:01.894 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:01.894 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.894 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.894 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.154 21:47:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.445 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # 
local pci status 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.446 21:47:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.736 21:47:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.736 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.736 00:04:08.736 real 0m12.504s 00:04:08.736 user 0m3.686s 00:04:08.736 sys 0m6.761s 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.736 21:47:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:08.736 ************************************ 00:04:08.736 END TEST nvme_mount 00:04:08.736 ************************************ 00:04:08.995 21:47:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:08.995 21:47:53 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.995 21:47:53 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.996 21:47:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:08.996 ************************************ 00:04:08.996 START TEST dm_mount 00:04:08.996 ************************************ 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # 
local disk=nvme0n1 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:08.996 21:47:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:09.933 Creating new GPT entries in memory. 00:04:09.933 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:09.933 other utilities. 00:04:09.933 21:47:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:09.933 21:47:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:09.933 21:47:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:09.933 21:47:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:09.933 21:47:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:10.871 Creating new GPT entries in memory. 00:04:10.871 The operation has completed successfully. 00:04:10.871 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:10.871 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.871 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.871 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.871 21:47:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:12.248 The operation has completed successfully. 
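The dm_mount steps logged around this point do the same thing through device-mapper: two ~1 GiB partitions are created on the scratch disk and combined into a single nvme_dm_test device, which is then formatted and mounted. A minimal sketch, assuming the two partitions already exist; the linear concatenation table below is an illustrative assumption, since the exact table built by setup/devices.sh is not visible in this log:

#!/usr/bin/env bash
set -euo pipefail
# Assumption: /dev/nvme0n1p1 and /dev/nvme0n1p2 were created by the sgdisk calls logged above.
P1=/dev/nvme0n1p1
P2=/dev/nvme0n1p2
MNT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount

S1=$(blockdev --getsz "$P1")             # partition sizes in 512-byte sectors
S2=$(blockdev --getsz "$P2")

# Concatenate the two partitions into one device-mapper target (illustrative linear table,
# not necessarily the table setup/devices.sh builds).
dmsetup create nvme_dm_test <<EOF
0 $S1 linear $P1 0
$S1 $S2 linear $P2 0
EOF

mkdir -p "$MNT"
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test "$MNT"
touch "$MNT/test_dm"                     # dummy file the verify step checks for

# Teardown mirrors the cleanup_dm path seen later in the log:
#   umount "$MNT" && dmsetup remove --force nvme_dm_test && wipefs --all "$P1" "$P2"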
00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1017821 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.249 21:47:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:15.542 21:47:59 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.542 21:47:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:18.081 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.082 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.082 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:18.082 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:18.082 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:18.082 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:18.341 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:18.341 00:04:18.341 real 0m9.471s 00:04:18.341 user 0m2.259s 00:04:18.341 sys 0m4.267s 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.341 21:48:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:18.341 ************************************ 00:04:18.341 END TEST dm_mount 00:04:18.341 ************************************ 00:04:18.341 21:48:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:18.341 21:48:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:18.341 21:48:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.342 21:48:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.342 21:48:02 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.342 21:48:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.342 21:48:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.911 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.911 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.911 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.911 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.911 21:48:02 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.911 21:48:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:18.911 00:04:18.911 real 0m26.348s 00:04:18.911 user 0m7.407s 00:04:18.911 sys 0m13.833s 00:04:18.911 21:48:02 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.911 21:48:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.911 ************************************ 00:04:18.911 END TEST devices 00:04:18.911 ************************************ 00:04:18.911 00:04:18.911 real 1m28.513s 00:04:18.911 user 0m26.920s 00:04:18.911 sys 0m50.412s 00:04:18.911 21:48:03 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.911 21:48:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.911 ************************************ 00:04:18.911 END TEST setup.sh 00:04:18.911 ************************************ 00:04:18.911 21:48:03 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:22.207 Hugepages 00:04:22.208 node hugesize free / total 00:04:22.208 node0 1048576kB 0 / 0 00:04:22.208 node0 2048kB 1024 / 1024 00:04:22.208 node1 1048576kB 0 / 0 00:04:22.208 node1 2048kB 1024 / 1024 00:04:22.208 00:04:22.208 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.208 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:22.208 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:22.208 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:22.208 21:48:06 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.208 21:48:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.208 21:48:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.208 21:48:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:25.502 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:00:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:04:25.502 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:25.502 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:25.503 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:25.503 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.881 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.141 21:48:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:28.080 21:48:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:28.080 21:48:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:28.080 21:48:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.080 21:48:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:28.080 21:48:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:28.080 21:48:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:28.080 21:48:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.080 21:48:12 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.080 21:48:12 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:28.080 21:48:12 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:28.080 21:48:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:28.080 21:48:12 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.372 Waiting for block devices as requested 00:04:31.372 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:31.631 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:31.631 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:31.631 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:31.890 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:31.890 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:31.890 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:32.149 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:32.149 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.149 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.408 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.408 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:32.408 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:32.667 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:32.667 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:32.667 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:32.927 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.927 21:48:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:32.927 21:48:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1485 -- # grep 0000:d8:00.0/nvme/nvme 00:04:32.927 21:48:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:32.927 21:48:17 -- common/autotest_common.sh@1490 -- # basename 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:32.927 21:48:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:32.927 21:48:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:32.927 21:48:17 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:32.927 21:48:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:32.927 21:48:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:32.927 21:48:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:32.927 21:48:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:32.927 21:48:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:32.927 21:48:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:32.927 21:48:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:32.927 21:48:17 -- common/autotest_common.sh@1541 -- # continue 00:04:32.927 21:48:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:32.927 21:48:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.927 21:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.186 21:48:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:33.186 21:48:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.186 21:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:33.186 21:48:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:36.476 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.476 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.736 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.736 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.736 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.736 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.116 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.116 21:48:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:38.116 21:48:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.116 21:48:22 -- common/autotest_common.sh@10 -- # set +x 00:04:38.376 21:48:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:38.376 21:48:22 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:38.376 21:48:22 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:38.376 21:48:22 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:38.376 21:48:22 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:38.376 21:48:22 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:38.376 21:48:22 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:38.376 21:48:22 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:38.376 21:48:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:38.376 21:48:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:38.376 21:48:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.376 21:48:22 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.376 21:48:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:38.376 21:48:22 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:38.376 21:48:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:d8:00.0 00:04:38.376 21:48:22 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:38.376 21:48:22 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:38.376 21:48:22 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:38.376 21:48:22 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.376 21:48:22 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:38.376 21:48:22 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:38.376 21:48:22 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:04:38.376 21:48:22 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:04:38.376 21:48:22 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1028156 00:04:38.376 21:48:22 -- common/autotest_common.sh@1583 -- # waitforlisten 1028156 00:04:38.376 21:48:22 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.376 21:48:22 -- common/autotest_common.sh@831 -- # '[' -z 1028156 ']' 00:04:38.376 21:48:22 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.376 21:48:22 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.376 21:48:22 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.376 21:48:22 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.376 21:48:22 -- common/autotest_common.sh@10 -- # set +x 00:04:38.376 [2024-09-30 21:48:22.676447] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
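[Annotation, not part of the captured log] The opal_revert_cleanup path above builds its BDF list by listing SPDK's NVMe addresses and keeping only devices whose PCI device ID read from sysfs is 0x0a54. A minimal stand-alone sketch of that filter, restating the commands visible in this run (repo path and device ID taken from the log):

    # list NVMe BDFs known to SPDK, keep only those with PCI device ID 0x0a54
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # path as used in this run
    bdfs=()
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"   # expected here: 0000:d8:00.0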
00:04:38.376 [2024-09-30 21:48:22.676515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028156 ] 00:04:38.376 [2024-09-30 21:48:22.743443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.636 [2024-09-30 21:48:22.821762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.895 21:48:23 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.895 21:48:23 -- common/autotest_common.sh@864 -- # return 0 00:04:38.895 21:48:23 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:38.895 21:48:23 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:38.895 21:48:23 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:42.188 nvme0n1 00:04:42.188 21:48:26 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:42.188 [2024-09-30 21:48:26.211973] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:42.188 request: 00:04:42.188 { 00:04:42.188 "nvme_ctrlr_name": "nvme0", 00:04:42.188 "password": "test", 00:04:42.188 "method": "bdev_nvme_opal_revert", 00:04:42.188 "req_id": 1 00:04:42.188 } 00:04:42.188 Got JSON-RPC error response 00:04:42.188 response: 00:04:42.188 { 00:04:42.188 "code": -32602, 00:04:42.188 "message": "Invalid parameters" 00:04:42.188 } 00:04:42.188 21:48:26 -- common/autotest_common.sh@1589 -- # true 00:04:42.188 21:48:26 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:42.188 21:48:26 -- common/autotest_common.sh@1593 -- # killprocess 1028156 00:04:42.188 21:48:26 -- common/autotest_common.sh@950 -- # '[' -z 1028156 ']' 00:04:42.188 21:48:26 -- common/autotest_common.sh@954 -- # kill -0 1028156 00:04:42.188 21:48:26 -- common/autotest_common.sh@955 -- # uname 00:04:42.188 21:48:26 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.188 21:48:26 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1028156 00:04:42.188 21:48:26 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.188 21:48:26 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.188 21:48:26 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1028156' 00:04:42.188 killing process with pid 1028156 00:04:42.188 21:48:26 -- common/autotest_common.sh@969 -- # kill 1028156 00:04:42.188 21:48:26 -- common/autotest_common.sh@974 -- # wait 1028156 00:04:44.095 21:48:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:44.095 21:48:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:44.095 21:48:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:44.095 21:48:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:44.095 21:48:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:44.095 21:48:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.095 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:44.095 21:48:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:44.095 21:48:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:44.095 21:48:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.095 21:48:28 -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:04:44.095 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:04:44.355 ************************************ 00:04:44.355 START TEST env 00:04:44.355 ************************************ 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:04:44.355 * Looking for test storage... 00:04:44.355 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:44.355 21:48:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.355 21:48:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.355 21:48:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.355 21:48:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.355 21:48:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.355 21:48:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.355 21:48:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.355 21:48:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.355 21:48:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.355 21:48:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.355 21:48:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.355 21:48:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:44.355 21:48:28 env -- scripts/common.sh@345 -- # : 1 00:04:44.355 21:48:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.355 21:48:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.355 21:48:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:44.355 21:48:28 env -- scripts/common.sh@353 -- # local d=1 00:04:44.355 21:48:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.355 21:48:28 env -- scripts/common.sh@355 -- # echo 1 00:04:44.355 21:48:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.355 21:48:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:44.355 21:48:28 env -- scripts/common.sh@353 -- # local d=2 00:04:44.355 21:48:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.355 21:48:28 env -- scripts/common.sh@355 -- # echo 2 00:04:44.355 21:48:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.355 21:48:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.355 21:48:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.355 21:48:28 env -- scripts/common.sh@368 -- # return 0 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:44.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.355 --rc genhtml_branch_coverage=1 00:04:44.355 --rc genhtml_function_coverage=1 00:04:44.355 --rc genhtml_legend=1 00:04:44.355 --rc geninfo_all_blocks=1 00:04:44.355 --rc geninfo_unexecuted_blocks=1 00:04:44.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:44.355 ' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:44.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.355 --rc genhtml_branch_coverage=1 00:04:44.355 --rc genhtml_function_coverage=1 00:04:44.355 --rc genhtml_legend=1 00:04:44.355 --rc geninfo_all_blocks=1 00:04:44.355 --rc geninfo_unexecuted_blocks=1 00:04:44.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:44.355 ' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:44.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.355 --rc genhtml_branch_coverage=1 00:04:44.355 --rc genhtml_function_coverage=1 00:04:44.355 --rc genhtml_legend=1 00:04:44.355 --rc geninfo_all_blocks=1 00:04:44.355 --rc geninfo_unexecuted_blocks=1 00:04:44.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:44.355 ' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:44.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.355 --rc genhtml_branch_coverage=1 00:04:44.355 --rc genhtml_function_coverage=1 00:04:44.355 --rc genhtml_legend=1 00:04:44.355 --rc geninfo_all_blocks=1 00:04:44.355 --rc geninfo_unexecuted_blocks=1 00:04:44.355 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:44.355 ' 00:04:44.355 21:48:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.355 21:48:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.355 21:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.355 ************************************ 00:04:44.355 START TEST env_memory 00:04:44.355 ************************************ 00:04:44.355 21:48:28 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.355 00:04:44.355 00:04:44.355 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.355 http://cunit.sourceforge.net/ 00:04:44.355 00:04:44.355 00:04:44.355 Suite: memory 00:04:44.614 Test: alloc and free memory map ...[2024-09-30 21:48:28.746432] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.614 passed 00:04:44.615 Test: mem map translation ...[2024-09-30 21:48:28.760137] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.615 [2024-09-30 21:48:28.760155] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.615 [2024-09-30 21:48:28.760189] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.615 [2024-09-30 21:48:28.760198] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.615 passed 00:04:44.615 Test: mem map registration ...[2024-09-30 21:48:28.781528] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:44.615 [2024-09-30 21:48:28.781545] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:44.615 passed 00:04:44.615 Test: mem map adjacent registrations ...passed 00:04:44.615 00:04:44.615 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.615 suites 1 1 n/a 0 0 00:04:44.615 tests 4 4 4 0 0 00:04:44.615 asserts 152 152 152 0 n/a 00:04:44.615 00:04:44.615 Elapsed time = 0.088 seconds 00:04:44.615 00:04:44.615 real 0m0.102s 00:04:44.615 user 0m0.089s 00:04:44.615 sys 0m0.012s 00:04:44.615 21:48:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.615 21:48:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.615 ************************************ 00:04:44.615 END TEST env_memory 00:04:44.615 ************************************ 00:04:44.615 21:48:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.615 21:48:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.615 21:48:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.615 21:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.615 ************************************ 00:04:44.615 START TEST env_vtophys 00:04:44.615 ************************************ 00:04:44.615 21:48:28 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.615 EAL: lib.eal log level changed from notice to debug 00:04:44.615 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.615 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.615 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.615 EAL: Detected lcore 3 as 
core 3 on socket 0 00:04:44.615 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.615 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.615 EAL: Detected lcore 6 as core 6 on socket 0 00:04:44.615 EAL: Detected lcore 7 as core 8 on socket 0 00:04:44.615 EAL: Detected lcore 8 as core 9 on socket 0 00:04:44.615 EAL: Detected lcore 9 as core 10 on socket 0 00:04:44.615 EAL: Detected lcore 10 as core 11 on socket 0 00:04:44.615 EAL: Detected lcore 11 as core 12 on socket 0 00:04:44.615 EAL: Detected lcore 12 as core 13 on socket 0 00:04:44.615 EAL: Detected lcore 13 as core 14 on socket 0 00:04:44.615 EAL: Detected lcore 14 as core 16 on socket 0 00:04:44.615 EAL: Detected lcore 15 as core 17 on socket 0 00:04:44.615 EAL: Detected lcore 16 as core 18 on socket 0 00:04:44.615 EAL: Detected lcore 17 as core 19 on socket 0 00:04:44.615 EAL: Detected lcore 18 as core 20 on socket 0 00:04:44.615 EAL: Detected lcore 19 as core 21 on socket 0 00:04:44.615 EAL: Detected lcore 20 as core 22 on socket 0 00:04:44.615 EAL: Detected lcore 21 as core 24 on socket 0 00:04:44.615 EAL: Detected lcore 22 as core 25 on socket 0 00:04:44.615 EAL: Detected lcore 23 as core 26 on socket 0 00:04:44.615 EAL: Detected lcore 24 as core 27 on socket 0 00:04:44.615 EAL: Detected lcore 25 as core 28 on socket 0 00:04:44.615 EAL: Detected lcore 26 as core 29 on socket 0 00:04:44.615 EAL: Detected lcore 27 as core 30 on socket 0 00:04:44.615 EAL: Detected lcore 28 as core 0 on socket 1 00:04:44.615 EAL: Detected lcore 29 as core 1 on socket 1 00:04:44.615 EAL: Detected lcore 30 as core 2 on socket 1 00:04:44.615 EAL: Detected lcore 31 as core 3 on socket 1 00:04:44.615 EAL: Detected lcore 32 as core 4 on socket 1 00:04:44.615 EAL: Detected lcore 33 as core 5 on socket 1 00:04:44.615 EAL: Detected lcore 34 as core 6 on socket 1 00:04:44.615 EAL: Detected lcore 35 as core 8 on socket 1 00:04:44.615 EAL: Detected lcore 36 as core 9 on socket 1 00:04:44.615 EAL: Detected lcore 37 as core 10 on socket 1 00:04:44.615 EAL: Detected lcore 38 as core 11 on socket 1 00:04:44.615 EAL: Detected lcore 39 as core 12 on socket 1 00:04:44.615 EAL: Detected lcore 40 as core 13 on socket 1 00:04:44.615 EAL: Detected lcore 41 as core 14 on socket 1 00:04:44.615 EAL: Detected lcore 42 as core 16 on socket 1 00:04:44.615 EAL: Detected lcore 43 as core 17 on socket 1 00:04:44.615 EAL: Detected lcore 44 as core 18 on socket 1 00:04:44.615 EAL: Detected lcore 45 as core 19 on socket 1 00:04:44.615 EAL: Detected lcore 46 as core 20 on socket 1 00:04:44.615 EAL: Detected lcore 47 as core 21 on socket 1 00:04:44.615 EAL: Detected lcore 48 as core 22 on socket 1 00:04:44.615 EAL: Detected lcore 49 as core 24 on socket 1 00:04:44.615 EAL: Detected lcore 50 as core 25 on socket 1 00:04:44.615 EAL: Detected lcore 51 as core 26 on socket 1 00:04:44.615 EAL: Detected lcore 52 as core 27 on socket 1 00:04:44.615 EAL: Detected lcore 53 as core 28 on socket 1 00:04:44.615 EAL: Detected lcore 54 as core 29 on socket 1 00:04:44.615 EAL: Detected lcore 55 as core 30 on socket 1 00:04:44.615 EAL: Detected lcore 56 as core 0 on socket 0 00:04:44.615 EAL: Detected lcore 57 as core 1 on socket 0 00:04:44.615 EAL: Detected lcore 58 as core 2 on socket 0 00:04:44.615 EAL: Detected lcore 59 as core 3 on socket 0 00:04:44.615 EAL: Detected lcore 60 as core 4 on socket 0 00:04:44.615 EAL: Detected lcore 61 as core 5 on socket 0 00:04:44.615 EAL: Detected lcore 62 as core 6 on socket 0 00:04:44.615 EAL: Detected lcore 63 as core 8 on socket 0 00:04:44.615 EAL: 
Detected lcore 64 as core 9 on socket 0 00:04:44.615 EAL: Detected lcore 65 as core 10 on socket 0 00:04:44.615 EAL: Detected lcore 66 as core 11 on socket 0 00:04:44.615 EAL: Detected lcore 67 as core 12 on socket 0 00:04:44.615 EAL: Detected lcore 68 as core 13 on socket 0 00:04:44.615 EAL: Detected lcore 69 as core 14 on socket 0 00:04:44.615 EAL: Detected lcore 70 as core 16 on socket 0 00:04:44.615 EAL: Detected lcore 71 as core 17 on socket 0 00:04:44.615 EAL: Detected lcore 72 as core 18 on socket 0 00:04:44.615 EAL: Detected lcore 73 as core 19 on socket 0 00:04:44.615 EAL: Detected lcore 74 as core 20 on socket 0 00:04:44.615 EAL: Detected lcore 75 as core 21 on socket 0 00:04:44.615 EAL: Detected lcore 76 as core 22 on socket 0 00:04:44.615 EAL: Detected lcore 77 as core 24 on socket 0 00:04:44.615 EAL: Detected lcore 78 as core 25 on socket 0 00:04:44.615 EAL: Detected lcore 79 as core 26 on socket 0 00:04:44.615 EAL: Detected lcore 80 as core 27 on socket 0 00:04:44.615 EAL: Detected lcore 81 as core 28 on socket 0 00:04:44.615 EAL: Detected lcore 82 as core 29 on socket 0 00:04:44.615 EAL: Detected lcore 83 as core 30 on socket 0 00:04:44.615 EAL: Detected lcore 84 as core 0 on socket 1 00:04:44.615 EAL: Detected lcore 85 as core 1 on socket 1 00:04:44.615 EAL: Detected lcore 86 as core 2 on socket 1 00:04:44.615 EAL: Detected lcore 87 as core 3 on socket 1 00:04:44.615 EAL: Detected lcore 88 as core 4 on socket 1 00:04:44.615 EAL: Detected lcore 89 as core 5 on socket 1 00:04:44.615 EAL: Detected lcore 90 as core 6 on socket 1 00:04:44.615 EAL: Detected lcore 91 as core 8 on socket 1 00:04:44.615 EAL: Detected lcore 92 as core 9 on socket 1 00:04:44.615 EAL: Detected lcore 93 as core 10 on socket 1 00:04:44.615 EAL: Detected lcore 94 as core 11 on socket 1 00:04:44.615 EAL: Detected lcore 95 as core 12 on socket 1 00:04:44.615 EAL: Detected lcore 96 as core 13 on socket 1 00:04:44.615 EAL: Detected lcore 97 as core 14 on socket 1 00:04:44.615 EAL: Detected lcore 98 as core 16 on socket 1 00:04:44.615 EAL: Detected lcore 99 as core 17 on socket 1 00:04:44.615 EAL: Detected lcore 100 as core 18 on socket 1 00:04:44.615 EAL: Detected lcore 101 as core 19 on socket 1 00:04:44.615 EAL: Detected lcore 102 as core 20 on socket 1 00:04:44.615 EAL: Detected lcore 103 as core 21 on socket 1 00:04:44.615 EAL: Detected lcore 104 as core 22 on socket 1 00:04:44.615 EAL: Detected lcore 105 as core 24 on socket 1 00:04:44.615 EAL: Detected lcore 106 as core 25 on socket 1 00:04:44.615 EAL: Detected lcore 107 as core 26 on socket 1 00:04:44.615 EAL: Detected lcore 108 as core 27 on socket 1 00:04:44.615 EAL: Detected lcore 109 as core 28 on socket 1 00:04:44.615 EAL: Detected lcore 110 as core 29 on socket 1 00:04:44.615 EAL: Detected lcore 111 as core 30 on socket 1 00:04:44.615 EAL: Maximum logical cores by configuration: 128 00:04:44.615 EAL: Detected CPU lcores: 112 00:04:44.615 EAL: Detected NUMA nodes: 2 00:04:44.615 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.615 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:44.615 EAL: Checking presence of .so 'librte_eal.so' 00:04:44.615 EAL: Detected static linkage of DPDK 00:04:44.615 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.615 EAL: Bus pci wants IOVA as 'DC' 00:04:44.615 EAL: Buses did not request a specific IOVA mode. 00:04:44.615 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.615 EAL: Selected IOVA mode 'VA' 00:04:44.615 EAL: Probing VFIO support... 
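[Annotation, not part of the captured log] The EAL topology scan above ends with 112 detected lcores across 2 NUMA nodes. A quick way to cross-check that map against what the host itself reports (standard Linux tooling, not part of the harness):

    # compare the EAL lcore/NUMA report with the kernel's view
    lscpu | grep -E '^(CPU\(s\)|Socket|NUMA)'
    cat /sys/devices/system/node/node*/cpulist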
00:04:44.615 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.615 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.615 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.615 EAL: VFIO support initialized 00:04:44.615 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.615 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.615 EAL: Setting up physically contiguous memory... 00:04:44.615 EAL: Setting maximum number of open files to 524288 00:04:44.615 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.615 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.615 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.615 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.615 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.615 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.615 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.615 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.615 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:44.615 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.615 EAL: Hugepages will be freed exactly as allocated. 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: TSC frequency is ~2500000 KHz 00:04:44.615 EAL: Main lcore 0 is ready (tid=7fd65a971a00;cpuset=[0]) 00:04:44.615 EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 0 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.615 00:04:44.615 00:04:44.615 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.615 http://cunit.sourceforge.net/ 00:04:44.615 00:04:44.615 00:04:44.615 Suite: components_suite 00:04:44.615 Test: vtophys_malloc_test ...passed 00:04:44.615 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 4 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.615 EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 4 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.615 EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 4 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.615 EAL: Trying to obtain current memory policy. 
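[Annotation, not part of the captured log] Each "Heap on socket 0 was expanded/shrunk by N MB" pair above is the registered 'spdk:(nil)' mem event callback firing as the test allocates and frees from the hugepage-backed DPDK heap. While the test runs, the same effect is visible from the host, assuming the 2048 kB pages reserved earlier in this log:

    # watch 2 MB hugepage consumption while the vtophys test is running
    grep -i hugepages_ /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages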
00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 4 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.615 EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.615 EAL: Restoring previous memory policy: 4 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.615 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.615 EAL: request: mp_malloc_sync 00:04:44.615 EAL: No shared files mode enabled, IPC is disabled 00:04:44.615 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.615 EAL: Trying to obtain current memory policy. 00:04:44.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.874 EAL: Restoring previous memory policy: 4 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.874 EAL: Trying to obtain current memory policy. 00:04:44.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.874 EAL: Restoring previous memory policy: 4 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.874 EAL: Trying to obtain current memory policy. 00:04:44.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.874 EAL: Restoring previous memory policy: 4 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.874 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.874 EAL: request: mp_malloc_sync 00:04:44.874 EAL: No shared files mode enabled, IPC is disabled 00:04:44.874 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.874 EAL: Trying to obtain current memory policy. 
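[Annotation, not part of the captured log] The allocation sizes stepped through above (4 MB, 6 MB, 10 MB, ... up to 1026 MB) are chosen by the vtophys unit test itself. To reproduce just this portion outside the autotest harness, the same binary can be invoked directly (path taken from this log; it likely needs root and configured hugepages):

    sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys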
00:04:44.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.133 EAL: Restoring previous memory policy: 4 00:04:45.133 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.133 EAL: request: mp_malloc_sync 00:04:45.133 EAL: No shared files mode enabled, IPC is disabled 00:04:45.133 EAL: Heap on socket 0 was expanded by 514MB 00:04:45.133 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.133 EAL: request: mp_malloc_sync 00:04:45.133 EAL: No shared files mode enabled, IPC is disabled 00:04:45.133 EAL: Heap on socket 0 was shrunk by 514MB 00:04:45.133 EAL: Trying to obtain current memory policy. 00:04:45.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.393 EAL: Restoring previous memory policy: 4 00:04:45.393 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.393 EAL: request: mp_malloc_sync 00:04:45.393 EAL: No shared files mode enabled, IPC is disabled 00:04:45.393 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.653 EAL: request: mp_malloc_sync 00:04:45.653 EAL: No shared files mode enabled, IPC is disabled 00:04:45.653 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.653 passed 00:04:45.653 00:04:45.653 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.653 suites 1 1 n/a 0 0 00:04:45.653 tests 2 2 2 0 0 00:04:45.653 asserts 497 497 497 0 n/a 00:04:45.653 00:04:45.653 Elapsed time = 0.955 seconds 00:04:45.653 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.653 EAL: request: mp_malloc_sync 00:04:45.653 EAL: No shared files mode enabled, IPC is disabled 00:04:45.653 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.653 EAL: No shared files mode enabled, IPC is disabled 00:04:45.653 EAL: No shared files mode enabled, IPC is disabled 00:04:45.653 EAL: No shared files mode enabled, IPC is disabled 00:04:45.653 00:04:45.653 real 0m1.066s 00:04:45.653 user 0m0.619s 00:04:45.653 sys 0m0.427s 00:04:45.653 21:48:29 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.653 21:48:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:45.653 ************************************ 00:04:45.653 END TEST env_vtophys 00:04:45.653 ************************************ 00:04:45.653 21:48:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.653 21:48:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.653 21:48:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.653 21:48:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.912 ************************************ 00:04:45.912 START TEST env_pci 00:04:45.912 ************************************ 00:04:45.912 21:48:30 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.912 00:04:45.912 00:04:45.912 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.912 http://cunit.sourceforge.net/ 00:04:45.912 00:04:45.912 00:04:45.912 Suite: pci 00:04:45.912 Test: pci_hook ...[2024-09-30 21:48:30.037230] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1029443 has claimed it 00:04:45.912 EAL: Cannot find device (10000:00:01.0) 00:04:45.912 EAL: Failed to attach device on primary process 00:04:45.912 passed 00:04:45.912 00:04:45.912 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:45.912 suites 1 1 n/a 0 0 00:04:45.912 tests 1 1 1 0 0 00:04:45.912 asserts 25 25 25 0 n/a 00:04:45.912 00:04:45.912 Elapsed time = 0.033 seconds 00:04:45.912 00:04:45.912 real 0m0.053s 00:04:45.912 user 0m0.019s 00:04:45.912 sys 0m0.034s 00:04:45.912 21:48:30 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.912 21:48:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.912 ************************************ 00:04:45.912 END TEST env_pci 00:04:45.912 ************************************ 00:04:45.912 21:48:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.912 21:48:30 env -- env/env.sh@15 -- # uname 00:04:45.912 21:48:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.912 21:48:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.912 21:48:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.912 21:48:30 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:45.912 21:48:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.912 21:48:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.912 ************************************ 00:04:45.912 START TEST env_dpdk_post_init 00:04:45.912 ************************************ 00:04:45.912 21:48:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.912 EAL: Detected CPU lcores: 112 00:04:45.912 EAL: Detected NUMA nodes: 2 00:04:45.912 EAL: Detected static linkage of DPDK 00:04:45.912 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.912 EAL: Selected IOVA mode 'VA' 00:04:45.912 EAL: VFIO support initialized 00:04:45.912 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.170 EAL: Using IOMMU type 1 (Type 1) 00:04:46.740 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:51.069 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:51.069 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:04:51.069 Starting DPDK initialization... 00:04:51.069 Starting SPDK post initialization... 00:04:51.069 SPDK NVMe probe 00:04:51.069 Attaching to 0000:d8:00.0 00:04:51.069 Attached to 0000:d8:00.0 00:04:51.069 Cleaning up... 
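[Annotation, not part of the captured log] The env_dpdk_post_init probe above only attaches to 0000:d8:00.0 because setup.sh rebound that device to vfio-pci earlier in this log. A readlink check in the same spirit as get_nvme_ctrlr_from_bdf above shows which driver currently owns it:

    # show the driver bound to the NVMe device that was just probed
    basename "$(readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver)"   # vfio-pci while SPDK owns it, nvme after setup.sh reset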
00:04:51.069 00:04:51.069 real 0m4.750s 00:04:51.069 user 0m3.345s 00:04:51.069 sys 0m0.652s 00:04:51.069 21:48:34 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.069 21:48:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.069 ************************************ 00:04:51.069 END TEST env_dpdk_post_init 00:04:51.069 ************************************ 00:04:51.069 21:48:34 env -- env/env.sh@26 -- # uname 00:04:51.069 21:48:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.069 21:48:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.069 21:48:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.069 21:48:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.069 21:48:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.069 ************************************ 00:04:51.069 START TEST env_mem_callbacks 00:04:51.069 ************************************ 00:04:51.069 21:48:35 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.069 EAL: Detected CPU lcores: 112 00:04:51.069 EAL: Detected NUMA nodes: 2 00:04:51.069 EAL: Detected static linkage of DPDK 00:04:51.069 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.069 EAL: Selected IOVA mode 'VA' 00:04:51.069 EAL: VFIO support initialized 00:04:51.069 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.069 00:04:51.069 00:04:51.069 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.069 http://cunit.sourceforge.net/ 00:04:51.069 00:04:51.069 00:04:51.069 Suite: memory 00:04:51.069 Test: test ... 
00:04:51.069 register 0x200000200000 2097152 00:04:51.069 malloc 3145728 00:04:51.069 register 0x200000400000 4194304 00:04:51.069 buf 0x200000500000 len 3145728 PASSED 00:04:51.069 malloc 64 00:04:51.069 buf 0x2000004fff40 len 64 PASSED 00:04:51.069 malloc 4194304 00:04:51.069 register 0x200000800000 6291456 00:04:51.069 buf 0x200000a00000 len 4194304 PASSED 00:04:51.069 free 0x200000500000 3145728 00:04:51.069 free 0x2000004fff40 64 00:04:51.069 unregister 0x200000400000 4194304 PASSED 00:04:51.069 free 0x200000a00000 4194304 00:04:51.069 unregister 0x200000800000 6291456 PASSED 00:04:51.069 malloc 8388608 00:04:51.069 register 0x200000400000 10485760 00:04:51.069 buf 0x200000600000 len 8388608 PASSED 00:04:51.069 free 0x200000600000 8388608 00:04:51.069 unregister 0x200000400000 10485760 PASSED 00:04:51.069 passed 00:04:51.069 00:04:51.069 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.069 suites 1 1 n/a 0 0 00:04:51.069 tests 1 1 1 0 0 00:04:51.069 asserts 15 15 15 0 n/a 00:04:51.069 00:04:51.069 Elapsed time = 0.005 seconds 00:04:51.069 00:04:51.069 real 0m0.065s 00:04:51.069 user 0m0.022s 00:04:51.069 sys 0m0.043s 00:04:51.069 21:48:35 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.069 21:48:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.069 ************************************ 00:04:51.069 END TEST env_mem_callbacks 00:04:51.069 ************************************ 00:04:51.069 00:04:51.069 real 0m6.637s 00:04:51.069 user 0m4.340s 00:04:51.069 sys 0m1.571s 00:04:51.069 21:48:35 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.069 21:48:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.069 ************************************ 00:04:51.069 END TEST env 00:04:51.070 ************************************ 00:04:51.070 21:48:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.070 21:48:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.070 21:48:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.070 21:48:35 -- common/autotest_common.sh@10 -- # set +x 00:04:51.070 ************************************ 00:04:51.070 START TEST rpc 00:04:51.070 ************************************ 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.070 * Looking for test storage... 
00:04:51.070 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.070 21:48:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.070 21:48:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.070 21:48:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.070 21:48:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.070 21:48:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.070 21:48:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:51.070 21:48:35 rpc -- scripts/common.sh@345 -- # : 1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.070 21:48:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.070 21:48:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@353 -- # local d=1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.070 21:48:35 rpc -- scripts/common.sh@355 -- # echo 1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.070 21:48:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@353 -- # local d=2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.070 21:48:35 rpc -- scripts/common.sh@355 -- # echo 2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.070 21:48:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.070 21:48:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.070 21:48:35 rpc -- scripts/common.sh@368 -- # return 0 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.070 --rc genhtml_branch_coverage=1 00:04:51.070 --rc genhtml_function_coverage=1 00:04:51.070 --rc genhtml_legend=1 00:04:51.070 --rc geninfo_all_blocks=1 00:04:51.070 --rc geninfo_unexecuted_blocks=1 00:04:51.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.070 ' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.070 --rc genhtml_branch_coverage=1 00:04:51.070 --rc genhtml_function_coverage=1 00:04:51.070 --rc genhtml_legend=1 00:04:51.070 --rc geninfo_all_blocks=1 00:04:51.070 --rc geninfo_unexecuted_blocks=1 00:04:51.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.070 ' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:04:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.070 --rc genhtml_branch_coverage=1 00:04:51.070 --rc genhtml_function_coverage=1 00:04:51.070 --rc genhtml_legend=1 00:04:51.070 --rc geninfo_all_blocks=1 00:04:51.070 --rc geninfo_unexecuted_blocks=1 00:04:51.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.070 ' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.070 --rc genhtml_branch_coverage=1 00:04:51.070 --rc genhtml_function_coverage=1 00:04:51.070 --rc genhtml_legend=1 00:04:51.070 --rc geninfo_all_blocks=1 00:04:51.070 --rc geninfo_unexecuted_blocks=1 00:04:51.070 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:51.070 ' 00:04:51.070 21:48:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1030621 00:04:51.070 21:48:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.070 21:48:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1030621 00:04:51.070 21:48:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@831 -- # '[' -z 1030621 ']' 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.070 21:48:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.070 [2024-09-30 21:48:35.398541] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:51.070 [2024-09-30 21:48:35.398596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030621 ] 00:04:51.330 [2024-09-30 21:48:35.463234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.330 [2024-09-30 21:48:35.541745] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.330 [2024-09-30 21:48:35.541786] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1030621' to capture a snapshot of events at runtime. 00:04:51.330 [2024-09-30 21:48:35.541795] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.330 [2024-09-30 21:48:35.541804] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.330 [2024-09-30 21:48:35.541811] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1030621 for offline analysis/debug. 
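[Annotation, not part of the captured log] The two app_setup_trace NOTICE lines above describe the trace-capture options for this spdk_tgt instance. Spelled out as commands (app name, pid and shm file taken from this run; the spdk_trace binary path is assumed to match the build tree used here):

    # live snapshot while the target is still running
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 1030621
    # or keep the raw trace file for offline analysis/debug
    cp /dev/shm/spdk_tgt_trace.pid1030621 .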
00:04:51.330 [2024-09-30 21:48:35.541835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.589 21:48:35 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.589 21:48:35 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:51.589 21:48:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:51.590 21:48:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:51.590 21:48:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:51.590 21:48:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:51.590 21:48:35 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.590 21:48:35 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.590 21:48:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 ************************************ 00:04:51.590 START TEST rpc_integrity 00:04:51.590 ************************************ 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.590 { 00:04:51.590 "name": "Malloc0", 00:04:51.590 "aliases": [ 00:04:51.590 "2150ad15-2169-4928-996f-1ebac2cfaf46" 00:04:51.590 ], 00:04:51.590 "product_name": "Malloc disk", 00:04:51.590 "block_size": 512, 00:04:51.590 "num_blocks": 16384, 00:04:51.590 "uuid": "2150ad15-2169-4928-996f-1ebac2cfaf46", 00:04:51.590 "assigned_rate_limits": { 00:04:51.590 "rw_ios_per_sec": 0, 00:04:51.590 "rw_mbytes_per_sec": 0, 00:04:51.590 "r_mbytes_per_sec": 0, 00:04:51.590 "w_mbytes_per_sec": 
0 00:04:51.590 }, 00:04:51.590 "claimed": false, 00:04:51.590 "zoned": false, 00:04:51.590 "supported_io_types": { 00:04:51.590 "read": true, 00:04:51.590 "write": true, 00:04:51.590 "unmap": true, 00:04:51.590 "flush": true, 00:04:51.590 "reset": true, 00:04:51.590 "nvme_admin": false, 00:04:51.590 "nvme_io": false, 00:04:51.590 "nvme_io_md": false, 00:04:51.590 "write_zeroes": true, 00:04:51.590 "zcopy": true, 00:04:51.590 "get_zone_info": false, 00:04:51.590 "zone_management": false, 00:04:51.590 "zone_append": false, 00:04:51.590 "compare": false, 00:04:51.590 "compare_and_write": false, 00:04:51.590 "abort": true, 00:04:51.590 "seek_hole": false, 00:04:51.590 "seek_data": false, 00:04:51.590 "copy": true, 00:04:51.590 "nvme_iov_md": false 00:04:51.590 }, 00:04:51.590 "memory_domains": [ 00:04:51.590 { 00:04:51.590 "dma_device_id": "system", 00:04:51.590 "dma_device_type": 1 00:04:51.590 }, 00:04:51.590 { 00:04:51.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.590 "dma_device_type": 2 00:04:51.590 } 00:04:51.590 ], 00:04:51.590 "driver_specific": {} 00:04:51.590 } 00:04:51.590 ]' 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 [2024-09-30 21:48:35.915731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:51.590 [2024-09-30 21:48:35.915763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.590 [2024-09-30 21:48:35.915779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x602b790 00:04:51.590 [2024-09-30 21:48:35.915788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.590 [2024-09-30 21:48:35.916676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.590 [2024-09-30 21:48:35.916699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.590 Passthru0 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.590 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.590 { 00:04:51.590 "name": "Malloc0", 00:04:51.590 "aliases": [ 00:04:51.590 "2150ad15-2169-4928-996f-1ebac2cfaf46" 00:04:51.590 ], 00:04:51.590 "product_name": "Malloc disk", 00:04:51.590 "block_size": 512, 00:04:51.590 "num_blocks": 16384, 00:04:51.590 "uuid": "2150ad15-2169-4928-996f-1ebac2cfaf46", 00:04:51.590 "assigned_rate_limits": { 00:04:51.590 "rw_ios_per_sec": 0, 00:04:51.590 "rw_mbytes_per_sec": 0, 00:04:51.590 "r_mbytes_per_sec": 0, 00:04:51.590 "w_mbytes_per_sec": 0 00:04:51.590 }, 00:04:51.590 "claimed": true, 00:04:51.590 "claim_type": "exclusive_write", 00:04:51.590 "zoned": false, 00:04:51.590 "supported_io_types": { 00:04:51.590 "read": true, 00:04:51.590 "write": true, 00:04:51.590 "unmap": true, 
00:04:51.590 "flush": true, 00:04:51.590 "reset": true, 00:04:51.590 "nvme_admin": false, 00:04:51.590 "nvme_io": false, 00:04:51.590 "nvme_io_md": false, 00:04:51.590 "write_zeroes": true, 00:04:51.590 "zcopy": true, 00:04:51.590 "get_zone_info": false, 00:04:51.590 "zone_management": false, 00:04:51.590 "zone_append": false, 00:04:51.590 "compare": false, 00:04:51.590 "compare_and_write": false, 00:04:51.590 "abort": true, 00:04:51.590 "seek_hole": false, 00:04:51.590 "seek_data": false, 00:04:51.590 "copy": true, 00:04:51.590 "nvme_iov_md": false 00:04:51.590 }, 00:04:51.590 "memory_domains": [ 00:04:51.590 { 00:04:51.590 "dma_device_id": "system", 00:04:51.590 "dma_device_type": 1 00:04:51.590 }, 00:04:51.590 { 00:04:51.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.590 "dma_device_type": 2 00:04:51.590 } 00:04:51.590 ], 00:04:51.590 "driver_specific": {} 00:04:51.590 }, 00:04:51.590 { 00:04:51.590 "name": "Passthru0", 00:04:51.590 "aliases": [ 00:04:51.590 "248260fe-4843-50fa-94e5-96ae450f6462" 00:04:51.590 ], 00:04:51.590 "product_name": "passthru", 00:04:51.590 "block_size": 512, 00:04:51.590 "num_blocks": 16384, 00:04:51.590 "uuid": "248260fe-4843-50fa-94e5-96ae450f6462", 00:04:51.590 "assigned_rate_limits": { 00:04:51.590 "rw_ios_per_sec": 0, 00:04:51.590 "rw_mbytes_per_sec": 0, 00:04:51.590 "r_mbytes_per_sec": 0, 00:04:51.590 "w_mbytes_per_sec": 0 00:04:51.590 }, 00:04:51.590 "claimed": false, 00:04:51.590 "zoned": false, 00:04:51.590 "supported_io_types": { 00:04:51.590 "read": true, 00:04:51.590 "write": true, 00:04:51.590 "unmap": true, 00:04:51.590 "flush": true, 00:04:51.590 "reset": true, 00:04:51.590 "nvme_admin": false, 00:04:51.590 "nvme_io": false, 00:04:51.590 "nvme_io_md": false, 00:04:51.590 "write_zeroes": true, 00:04:51.590 "zcopy": true, 00:04:51.590 "get_zone_info": false, 00:04:51.590 "zone_management": false, 00:04:51.590 "zone_append": false, 00:04:51.590 "compare": false, 00:04:51.590 "compare_and_write": false, 00:04:51.590 "abort": true, 00:04:51.590 "seek_hole": false, 00:04:51.590 "seek_data": false, 00:04:51.590 "copy": true, 00:04:51.590 "nvme_iov_md": false 00:04:51.590 }, 00:04:51.590 "memory_domains": [ 00:04:51.590 { 00:04:51.590 "dma_device_id": "system", 00:04:51.590 "dma_device_type": 1 00:04:51.590 }, 00:04:51.590 { 00:04:51.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.590 "dma_device_type": 2 00:04:51.590 } 00:04:51.590 ], 00:04:51.590 "driver_specific": { 00:04:51.590 "passthru": { 00:04:51.590 "name": "Passthru0", 00:04:51.590 "base_bdev_name": "Malloc0" 00:04:51.590 } 00:04:51.590 } 00:04:51.590 } 00:04:51.590 ]' 00:04:51.590 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.849 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.849 21:48:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.849 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.849 21:48:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.849 21:48:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.849 21:48:36 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.849 21:48:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.849 21:48:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.849 21:48:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.849 00:04:51.849 real 0m0.289s 00:04:51.849 user 0m0.170s 00:04:51.849 sys 0m0.053s 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.849 21:48:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 ************************************ 00:04:51.849 END TEST rpc_integrity 00:04:51.849 ************************************ 00:04:51.849 21:48:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.849 21:48:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.849 21:48:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.849 21:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 ************************************ 00:04:51.849 START TEST rpc_plugins 00:04:51.849 ************************************ 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:51.849 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.849 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.849 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.849 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.849 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.849 { 00:04:51.849 "name": "Malloc1", 00:04:51.849 "aliases": [ 00:04:51.849 "bd70e3b4-e4aa-4017-8c62-e7a6a9e17bd5" 00:04:51.849 ], 00:04:51.849 "product_name": "Malloc disk", 00:04:51.849 "block_size": 4096, 00:04:51.849 "num_blocks": 256, 00:04:51.849 "uuid": "bd70e3b4-e4aa-4017-8c62-e7a6a9e17bd5", 00:04:51.849 "assigned_rate_limits": { 00:04:51.849 "rw_ios_per_sec": 0, 00:04:51.849 "rw_mbytes_per_sec": 0, 00:04:51.849 "r_mbytes_per_sec": 0, 00:04:51.849 "w_mbytes_per_sec": 0 00:04:51.849 }, 00:04:51.849 "claimed": false, 00:04:51.849 "zoned": false, 00:04:51.849 "supported_io_types": { 00:04:51.849 "read": true, 00:04:51.849 "write": true, 00:04:51.849 "unmap": true, 00:04:51.849 "flush": true, 00:04:51.849 "reset": true, 00:04:51.849 "nvme_admin": false, 00:04:51.849 "nvme_io": false, 00:04:51.849 "nvme_io_md": false, 00:04:51.849 "write_zeroes": true, 00:04:51.849 "zcopy": true, 00:04:51.849 "get_zone_info": false, 00:04:51.849 "zone_management": false, 00:04:51.849 "zone_append": false, 00:04:51.849 "compare": false, 00:04:51.849 "compare_and_write": false, 00:04:51.849 "abort": true, 00:04:51.849 "seek_hole": false, 00:04:51.849 "seek_data": false, 00:04:51.849 "copy": true, 00:04:51.849 
"nvme_iov_md": false 00:04:51.849 }, 00:04:51.849 "memory_domains": [ 00:04:51.849 { 00:04:51.849 "dma_device_id": "system", 00:04:51.849 "dma_device_type": 1 00:04:51.849 }, 00:04:51.849 { 00:04:51.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.849 "dma_device_type": 2 00:04:51.849 } 00:04:51.849 ], 00:04:51.849 "driver_specific": {} 00:04:51.849 } 00:04:51.849 ]' 00:04:51.849 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.108 21:48:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.108 00:04:52.108 real 0m0.130s 00:04:52.108 user 0m0.082s 00:04:52.108 sys 0m0.014s 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.108 21:48:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 ************************************ 00:04:52.108 END TEST rpc_plugins 00:04:52.108 ************************************ 00:04:52.108 21:48:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:52.108 21:48:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.108 21:48:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.108 21:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 ************************************ 00:04:52.108 START TEST rpc_trace_cmd_test 00:04:52.108 ************************************ 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:52.108 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1030621", 00:04:52.108 "tpoint_group_mask": "0x8", 00:04:52.108 "iscsi_conn": { 00:04:52.108 "mask": "0x2", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "scsi": { 00:04:52.108 "mask": "0x4", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "bdev": { 00:04:52.108 "mask": "0x8", 00:04:52.108 "tpoint_mask": "0xffffffffffffffff" 00:04:52.108 }, 00:04:52.108 "nvmf_rdma": { 00:04:52.108 "mask": "0x10", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "nvmf_tcp": { 00:04:52.108 "mask": "0x20", 
00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "ftl": { 00:04:52.108 "mask": "0x40", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "blobfs": { 00:04:52.108 "mask": "0x80", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "dsa": { 00:04:52.108 "mask": "0x200", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "thread": { 00:04:52.108 "mask": "0x400", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "nvme_pcie": { 00:04:52.108 "mask": "0x800", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "iaa": { 00:04:52.108 "mask": "0x1000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "nvme_tcp": { 00:04:52.108 "mask": "0x2000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "bdev_nvme": { 00:04:52.108 "mask": "0x4000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "sock": { 00:04:52.108 "mask": "0x8000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "blob": { 00:04:52.108 "mask": "0x10000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 }, 00:04:52.108 "bdev_raid": { 00:04:52.108 "mask": "0x20000", 00:04:52.108 "tpoint_mask": "0x0" 00:04:52.108 } 00:04:52.108 }' 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:52.108 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:52.367 00:04:52.367 real 0m0.228s 00:04:52.367 user 0m0.183s 00:04:52.367 sys 0m0.036s 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.367 21:48:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.367 ************************************ 00:04:52.367 END TEST rpc_trace_cmd_test 00:04:52.367 ************************************ 00:04:52.367 21:48:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.367 21:48:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.367 21:48:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.367 21:48:36 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.367 21:48:36 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.367 21:48:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.367 ************************************ 00:04:52.367 START TEST rpc_daemon_integrity 00:04:52.367 ************************************ 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.367 21:48:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.367 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.627 { 00:04:52.627 "name": "Malloc2", 00:04:52.627 "aliases": [ 00:04:52.627 "791ac013-e373-4bab-a6e7-ba364e3cbed3" 00:04:52.627 ], 00:04:52.627 "product_name": "Malloc disk", 00:04:52.627 "block_size": 512, 00:04:52.627 "num_blocks": 16384, 00:04:52.627 "uuid": "791ac013-e373-4bab-a6e7-ba364e3cbed3", 00:04:52.627 "assigned_rate_limits": { 00:04:52.627 "rw_ios_per_sec": 0, 00:04:52.627 "rw_mbytes_per_sec": 0, 00:04:52.627 "r_mbytes_per_sec": 0, 00:04:52.627 "w_mbytes_per_sec": 0 00:04:52.627 }, 00:04:52.627 "claimed": false, 00:04:52.627 "zoned": false, 00:04:52.627 "supported_io_types": { 00:04:52.627 "read": true, 00:04:52.627 "write": true, 00:04:52.627 "unmap": true, 00:04:52.627 "flush": true, 00:04:52.627 "reset": true, 00:04:52.627 "nvme_admin": false, 00:04:52.627 "nvme_io": false, 00:04:52.627 "nvme_io_md": false, 00:04:52.627 "write_zeroes": true, 00:04:52.627 "zcopy": true, 00:04:52.627 "get_zone_info": false, 00:04:52.627 "zone_management": false, 00:04:52.627 "zone_append": false, 00:04:52.627 "compare": false, 00:04:52.627 "compare_and_write": false, 00:04:52.627 "abort": true, 00:04:52.627 "seek_hole": false, 00:04:52.627 "seek_data": false, 00:04:52.627 "copy": true, 00:04:52.627 "nvme_iov_md": false 00:04:52.627 }, 00:04:52.627 "memory_domains": [ 00:04:52.627 { 00:04:52.627 "dma_device_id": "system", 00:04:52.627 "dma_device_type": 1 00:04:52.627 }, 00:04:52.627 { 00:04:52.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.627 "dma_device_type": 2 00:04:52.627 } 00:04:52.627 ], 00:04:52.627 "driver_specific": {} 00:04:52.627 } 00:04:52.627 ]' 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.627 [2024-09-30 21:48:36.785962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.627 [2024-09-30 21:48:36.785992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:04:52.627 [2024-09-30 21:48:36.786007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x614cc50 00:04:52.627 [2024-09-30 21:48:36.786016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.627 [2024-09-30 21:48:36.786750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.627 [2024-09-30 21:48:36.786772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.627 Passthru0 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.627 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.627 { 00:04:52.627 "name": "Malloc2", 00:04:52.627 "aliases": [ 00:04:52.627 "791ac013-e373-4bab-a6e7-ba364e3cbed3" 00:04:52.627 ], 00:04:52.627 "product_name": "Malloc disk", 00:04:52.627 "block_size": 512, 00:04:52.627 "num_blocks": 16384, 00:04:52.627 "uuid": "791ac013-e373-4bab-a6e7-ba364e3cbed3", 00:04:52.627 "assigned_rate_limits": { 00:04:52.627 "rw_ios_per_sec": 0, 00:04:52.627 "rw_mbytes_per_sec": 0, 00:04:52.627 "r_mbytes_per_sec": 0, 00:04:52.627 "w_mbytes_per_sec": 0 00:04:52.627 }, 00:04:52.627 "claimed": true, 00:04:52.627 "claim_type": "exclusive_write", 00:04:52.627 "zoned": false, 00:04:52.627 "supported_io_types": { 00:04:52.627 "read": true, 00:04:52.627 "write": true, 00:04:52.627 "unmap": true, 00:04:52.627 "flush": true, 00:04:52.627 "reset": true, 00:04:52.627 "nvme_admin": false, 00:04:52.627 "nvme_io": false, 00:04:52.627 "nvme_io_md": false, 00:04:52.627 "write_zeroes": true, 00:04:52.627 "zcopy": true, 00:04:52.627 "get_zone_info": false, 00:04:52.627 "zone_management": false, 00:04:52.627 "zone_append": false, 00:04:52.627 "compare": false, 00:04:52.627 "compare_and_write": false, 00:04:52.627 "abort": true, 00:04:52.627 "seek_hole": false, 00:04:52.627 "seek_data": false, 00:04:52.627 "copy": true, 00:04:52.627 "nvme_iov_md": false 00:04:52.627 }, 00:04:52.627 "memory_domains": [ 00:04:52.627 { 00:04:52.627 "dma_device_id": "system", 00:04:52.627 "dma_device_type": 1 00:04:52.627 }, 00:04:52.627 { 00:04:52.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.627 "dma_device_type": 2 00:04:52.627 } 00:04:52.627 ], 00:04:52.627 "driver_specific": {} 00:04:52.627 }, 00:04:52.627 { 00:04:52.627 "name": "Passthru0", 00:04:52.627 "aliases": [ 00:04:52.627 "042acfaf-686f-58f1-be46-b800c720a41c" 00:04:52.627 ], 00:04:52.627 "product_name": "passthru", 00:04:52.627 "block_size": 512, 00:04:52.627 "num_blocks": 16384, 00:04:52.627 "uuid": "042acfaf-686f-58f1-be46-b800c720a41c", 00:04:52.627 "assigned_rate_limits": { 00:04:52.627 "rw_ios_per_sec": 0, 00:04:52.627 "rw_mbytes_per_sec": 0, 00:04:52.627 "r_mbytes_per_sec": 0, 00:04:52.627 "w_mbytes_per_sec": 0 00:04:52.627 }, 00:04:52.627 "claimed": false, 00:04:52.627 "zoned": false, 00:04:52.627 "supported_io_types": { 00:04:52.627 "read": true, 00:04:52.627 "write": true, 00:04:52.627 "unmap": true, 00:04:52.627 "flush": true, 00:04:52.627 "reset": true, 00:04:52.627 "nvme_admin": false, 00:04:52.627 "nvme_io": false, 00:04:52.627 "nvme_io_md": false, 
00:04:52.627 "write_zeroes": true, 00:04:52.627 "zcopy": true, 00:04:52.627 "get_zone_info": false, 00:04:52.627 "zone_management": false, 00:04:52.627 "zone_append": false, 00:04:52.628 "compare": false, 00:04:52.628 "compare_and_write": false, 00:04:52.628 "abort": true, 00:04:52.628 "seek_hole": false, 00:04:52.628 "seek_data": false, 00:04:52.628 "copy": true, 00:04:52.628 "nvme_iov_md": false 00:04:52.628 }, 00:04:52.628 "memory_domains": [ 00:04:52.628 { 00:04:52.628 "dma_device_id": "system", 00:04:52.628 "dma_device_type": 1 00:04:52.628 }, 00:04:52.628 { 00:04:52.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.628 "dma_device_type": 2 00:04:52.628 } 00:04:52.628 ], 00:04:52.628 "driver_specific": { 00:04:52.628 "passthru": { 00:04:52.628 "name": "Passthru0", 00:04:52.628 "base_bdev_name": "Malloc2" 00:04:52.628 } 00:04:52.628 } 00:04:52.628 } 00:04:52.628 ]' 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.628 00:04:52.628 real 0m0.265s 00:04:52.628 user 0m0.170s 00:04:52.628 sys 0m0.034s 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.628 21:48:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.628 ************************************ 00:04:52.628 END TEST rpc_daemon_integrity 00:04:52.628 ************************************ 00:04:52.628 21:48:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.628 21:48:36 rpc -- rpc/rpc.sh@84 -- # killprocess 1030621 00:04:52.628 21:48:36 rpc -- common/autotest_common.sh@950 -- # '[' -z 1030621 ']' 00:04:52.628 21:48:36 rpc -- common/autotest_common.sh@954 -- # kill -0 1030621 00:04:52.628 21:48:36 rpc -- common/autotest_common.sh@955 -- # uname 00:04:52.628 21:48:36 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.628 21:48:36 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030621 00:04:52.887 21:48:37 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.887 
21:48:37 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.887 21:48:37 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030621' 00:04:52.887 killing process with pid 1030621 00:04:52.887 21:48:37 rpc -- common/autotest_common.sh@969 -- # kill 1030621 00:04:52.887 21:48:37 rpc -- common/autotest_common.sh@974 -- # wait 1030621 00:04:53.146 00:04:53.146 real 0m2.135s 00:04:53.146 user 0m2.667s 00:04:53.146 sys 0m0.777s 00:04:53.146 21:48:37 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.146 21:48:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.146 ************************************ 00:04:53.146 END TEST rpc 00:04:53.146 ************************************ 00:04:53.146 21:48:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.146 21:48:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.146 21:48:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.146 21:48:37 -- common/autotest_common.sh@10 -- # set +x 00:04:53.146 ************************************ 00:04:53.146 START TEST skip_rpc 00:04:53.146 ************************************ 00:04:53.146 21:48:37 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:53.146 * Looking for test storage... 00:04:53.146 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:04:53.146 21:48:37 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.146 21:48:37 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.405 21:48:37 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.405 21:48:37 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.405 21:48:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.406 21:48:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.406 21:48:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.406 21:48:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.406 --rc genhtml_branch_coverage=1 00:04:53.406 --rc genhtml_function_coverage=1 00:04:53.406 --rc genhtml_legend=1 00:04:53.406 --rc geninfo_all_blocks=1 00:04:53.406 --rc geninfo_unexecuted_blocks=1 00:04:53.406 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.406 ' 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.406 --rc genhtml_branch_coverage=1 00:04:53.406 --rc genhtml_function_coverage=1 00:04:53.406 --rc genhtml_legend=1 00:04:53.406 --rc geninfo_all_blocks=1 00:04:53.406 --rc geninfo_unexecuted_blocks=1 00:04:53.406 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.406 ' 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.406 --rc genhtml_branch_coverage=1 00:04:53.406 --rc genhtml_function_coverage=1 00:04:53.406 --rc genhtml_legend=1 00:04:53.406 --rc geninfo_all_blocks=1 00:04:53.406 --rc geninfo_unexecuted_blocks=1 00:04:53.406 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.406 ' 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.406 --rc genhtml_branch_coverage=1 00:04:53.406 --rc genhtml_function_coverage=1 00:04:53.406 --rc genhtml_legend=1 00:04:53.406 --rc geninfo_all_blocks=1 00:04:53.406 --rc geninfo_unexecuted_blocks=1 00:04:53.406 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:53.406 ' 00:04:53.406 21:48:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:53.406 21:48:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:04:53.406 21:48:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.406 21:48:37 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.406 21:48:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.406 ************************************ 00:04:53.406 START TEST skip_rpc 00:04:53.406 ************************************ 00:04:53.406 21:48:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:53.406 21:48:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1031086 00:04:53.406 21:48:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.406 21:48:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.406 21:48:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.406 [2024-09-30 21:48:37.651187] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:53.406 [2024-09-30 21:48:37.651242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1031086 ] 00:04:53.406 [2024-09-30 21:48:37.716390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.665 [2024-09-30 21:48:37.788885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1031086 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1031086 ']' 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1031086 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1031086 
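The skip_rpc case above only has to prove that a target launched with --no-rpc-server never answers RPCs; a minimal sketch of the same check done by hand, assuming the workspace layout shown in the trace (build/bin/spdk_tgt, scripts/rpc.py) and the default /var/tmp/spdk.sock socket:

    # start the target without an RPC server, as rpc/skip_rpc.sh@15 does above
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    # any RPC is expected to fail; spdk_get_version is the one the test uses
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC server is answering" >&2
    else
        echo "RPC correctly unavailable, matching the es=1 result above"
    fi
    kill $tgt_pid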
00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1031086' 00:04:58.935 killing process with pid 1031086 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1031086 00:04:58.935 21:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1031086 00:04:58.935 00:04:58.935 real 0m5.387s 00:04:58.935 user 0m5.141s 00:04:58.935 sys 0m0.286s 00:04:58.935 21:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.935 21:48:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.935 ************************************ 00:04:58.935 END TEST skip_rpc 00:04:58.935 ************************************ 00:04:58.935 21:48:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.935 21:48:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.935 21:48:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.935 21:48:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.935 ************************************ 00:04:58.935 START TEST skip_rpc_with_json 00:04:58.935 ************************************ 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1032169 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1032169 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1032169 ']' 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.935 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.935 [2024-09-30 21:48:43.108305] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
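waitforlisten above blocks until the freshly started target answers on /var/tmp/spdk.sock; a hedged sketch of one way to do the same wait by hand (not necessarily how the helper in autotest_common.sh is implemented), assuming scripts/rpc.py from the same tree:

    # poll the RPC socket until it answers or roughly 50 s elapse (max_retries=100 in the trace)
    for i in $(seq 1 100); do
        ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1 && break
        sleep 0.5
    done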
00:04:58.935 [2024-09-30 21:48:43.108366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032169 ] 00:04:58.935 [2024-09-30 21:48:43.174382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.935 [2024-09-30 21:48:43.248454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.194 [2024-09-30 21:48:43.454274] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.194 request: 00:04:59.194 { 00:04:59.194 "trtype": "tcp", 00:04:59.194 "method": "nvmf_get_transports", 00:04:59.194 "req_id": 1 00:04:59.194 } 00:04:59.194 Got JSON-RPC error response 00:04:59.194 response: 00:04:59.194 { 00:04:59.194 "code": -19, 00:04:59.194 "message": "No such device" 00:04:59.194 } 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.194 [2024-09-30 21:48:43.462363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.194 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:04:59.455 { 00:04:59.455 "subsystems": [ 00:04:59.455 { 00:04:59.455 "subsystem": "scheduler", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "framework_set_scheduler", 00:04:59.455 "params": { 00:04:59.455 "name": "static" 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "vmd", 00:04:59.455 "config": [] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "sock", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "sock_set_default_impl", 00:04:59.455 "params": { 00:04:59.455 "impl_name": "posix" 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "sock_impl_set_options", 00:04:59.455 "params": { 00:04:59.455 "impl_name": "ssl", 00:04:59.455 "recv_buf_size": 4096, 00:04:59.455 "send_buf_size": 4096, 00:04:59.455 "enable_recv_pipe": true, 00:04:59.455 "enable_quickack": false, 00:04:59.455 
"enable_placement_id": 0, 00:04:59.455 "enable_zerocopy_send_server": true, 00:04:59.455 "enable_zerocopy_send_client": false, 00:04:59.455 "zerocopy_threshold": 0, 00:04:59.455 "tls_version": 0, 00:04:59.455 "enable_ktls": false 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "sock_impl_set_options", 00:04:59.455 "params": { 00:04:59.455 "impl_name": "posix", 00:04:59.455 "recv_buf_size": 2097152, 00:04:59.455 "send_buf_size": 2097152, 00:04:59.455 "enable_recv_pipe": true, 00:04:59.455 "enable_quickack": false, 00:04:59.455 "enable_placement_id": 0, 00:04:59.455 "enable_zerocopy_send_server": true, 00:04:59.455 "enable_zerocopy_send_client": false, 00:04:59.455 "zerocopy_threshold": 0, 00:04:59.455 "tls_version": 0, 00:04:59.455 "enable_ktls": false 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "iobuf", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "iobuf_set_options", 00:04:59.455 "params": { 00:04:59.455 "small_pool_count": 8192, 00:04:59.455 "large_pool_count": 1024, 00:04:59.455 "small_bufsize": 8192, 00:04:59.455 "large_bufsize": 135168 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "keyring", 00:04:59.455 "config": [] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "vfio_user_target", 00:04:59.455 "config": null 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "fsdev", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "fsdev_set_opts", 00:04:59.455 "params": { 00:04:59.455 "fsdev_io_pool_size": 65535, 00:04:59.455 "fsdev_io_cache_size": 256 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "accel", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "accel_set_options", 00:04:59.455 "params": { 00:04:59.455 "small_cache_size": 128, 00:04:59.455 "large_cache_size": 16, 00:04:59.455 "task_count": 2048, 00:04:59.455 "sequence_count": 2048, 00:04:59.455 "buf_count": 2048 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "bdev", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "bdev_set_options", 00:04:59.455 "params": { 00:04:59.455 "bdev_io_pool_size": 65535, 00:04:59.455 "bdev_io_cache_size": 256, 00:04:59.455 "bdev_auto_examine": true, 00:04:59.455 "iobuf_small_cache_size": 128, 00:04:59.455 "iobuf_large_cache_size": 16 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "bdev_raid_set_options", 00:04:59.455 "params": { 00:04:59.455 "process_window_size_kb": 1024, 00:04:59.455 "process_max_bandwidth_mb_sec": 0 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "bdev_nvme_set_options", 00:04:59.455 "params": { 00:04:59.455 "action_on_timeout": "none", 00:04:59.455 "timeout_us": 0, 00:04:59.455 "timeout_admin_us": 0, 00:04:59.455 "keep_alive_timeout_ms": 10000, 00:04:59.455 "arbitration_burst": 0, 00:04:59.455 "low_priority_weight": 0, 00:04:59.455 "medium_priority_weight": 0, 00:04:59.455 "high_priority_weight": 0, 00:04:59.455 "nvme_adminq_poll_period_us": 10000, 00:04:59.455 "nvme_ioq_poll_period_us": 0, 00:04:59.455 "io_queue_requests": 0, 00:04:59.455 "delay_cmd_submit": true, 00:04:59.455 "transport_retry_count": 4, 00:04:59.455 "bdev_retry_count": 3, 00:04:59.455 "transport_ack_timeout": 0, 00:04:59.455 "ctrlr_loss_timeout_sec": 0, 00:04:59.455 "reconnect_delay_sec": 0, 00:04:59.455 "fast_io_fail_timeout_sec": 0, 00:04:59.455 
"disable_auto_failback": false, 00:04:59.455 "generate_uuids": false, 00:04:59.455 "transport_tos": 0, 00:04:59.455 "nvme_error_stat": false, 00:04:59.455 "rdma_srq_size": 0, 00:04:59.455 "io_path_stat": false, 00:04:59.455 "allow_accel_sequence": false, 00:04:59.455 "rdma_max_cq_size": 0, 00:04:59.455 "rdma_cm_event_timeout_ms": 0, 00:04:59.455 "dhchap_digests": [ 00:04:59.455 "sha256", 00:04:59.455 "sha384", 00:04:59.455 "sha512" 00:04:59.455 ], 00:04:59.455 "dhchap_dhgroups": [ 00:04:59.455 "null", 00:04:59.455 "ffdhe2048", 00:04:59.455 "ffdhe3072", 00:04:59.455 "ffdhe4096", 00:04:59.455 "ffdhe6144", 00:04:59.455 "ffdhe8192" 00:04:59.455 ] 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "bdev_nvme_set_hotplug", 00:04:59.455 "params": { 00:04:59.455 "period_us": 100000, 00:04:59.455 "enable": false 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "bdev_iscsi_set_options", 00:04:59.455 "params": { 00:04:59.455 "timeout_sec": 30 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "bdev_wait_for_examine" 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "nvmf", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "nvmf_set_config", 00:04:59.455 "params": { 00:04:59.455 "discovery_filter": "match_any", 00:04:59.455 "admin_cmd_passthru": { 00:04:59.455 "identify_ctrlr": false 00:04:59.455 }, 00:04:59.455 "dhchap_digests": [ 00:04:59.455 "sha256", 00:04:59.455 "sha384", 00:04:59.455 "sha512" 00:04:59.455 ], 00:04:59.455 "dhchap_dhgroups": [ 00:04:59.455 "null", 00:04:59.455 "ffdhe2048", 00:04:59.455 "ffdhe3072", 00:04:59.455 "ffdhe4096", 00:04:59.455 "ffdhe6144", 00:04:59.455 "ffdhe8192" 00:04:59.455 ] 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "nvmf_set_max_subsystems", 00:04:59.455 "params": { 00:04:59.455 "max_subsystems": 1024 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "nvmf_set_crdt", 00:04:59.455 "params": { 00:04:59.455 "crdt1": 0, 00:04:59.455 "crdt2": 0, 00:04:59.455 "crdt3": 0 00:04:59.455 } 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "method": "nvmf_create_transport", 00:04:59.455 "params": { 00:04:59.455 "trtype": "TCP", 00:04:59.455 "max_queue_depth": 128, 00:04:59.455 "max_io_qpairs_per_ctrlr": 127, 00:04:59.455 "in_capsule_data_size": 4096, 00:04:59.455 "max_io_size": 131072, 00:04:59.455 "io_unit_size": 131072, 00:04:59.455 "max_aq_depth": 128, 00:04:59.455 "num_shared_buffers": 511, 00:04:59.455 "buf_cache_size": 4294967295, 00:04:59.455 "dif_insert_or_strip": false, 00:04:59.455 "zcopy": false, 00:04:59.455 "c2h_success": true, 00:04:59.455 "sock_priority": 0, 00:04:59.455 "abort_timeout_sec": 1, 00:04:59.455 "ack_timeout": 0, 00:04:59.455 "data_wr_pool_size": 0 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "nbd", 00:04:59.455 "config": [] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "ublk", 00:04:59.455 "config": [] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "vhost_blk", 00:04:59.455 "config": [] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "scsi", 00:04:59.455 "config": null 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "iscsi", 00:04:59.455 "config": [ 00:04:59.455 { 00:04:59.455 "method": "iscsi_set_options", 00:04:59.455 "params": { 00:04:59.455 "node_base": "iqn.2016-06.io.spdk", 00:04:59.455 "max_sessions": 128, 00:04:59.455 "max_connections_per_session": 2, 00:04:59.455 "max_queue_depth": 64, 00:04:59.455 
"default_time2wait": 2, 00:04:59.455 "default_time2retain": 20, 00:04:59.455 "first_burst_length": 8192, 00:04:59.455 "immediate_data": true, 00:04:59.455 "allow_duplicated_isid": false, 00:04:59.455 "error_recovery_level": 0, 00:04:59.455 "nop_timeout": 60, 00:04:59.455 "nop_in_interval": 30, 00:04:59.455 "disable_chap": false, 00:04:59.455 "require_chap": false, 00:04:59.455 "mutual_chap": false, 00:04:59.455 "chap_group": 0, 00:04:59.455 "max_large_datain_per_connection": 64, 00:04:59.455 "max_r2t_per_connection": 4, 00:04:59.455 "pdu_pool_size": 36864, 00:04:59.455 "immediate_data_pool_size": 16384, 00:04:59.455 "data_out_pool_size": 2048 00:04:59.455 } 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 }, 00:04:59.455 { 00:04:59.455 "subsystem": "vhost_scsi", 00:04:59.455 "config": [] 00:04:59.455 } 00:04:59.455 ] 00:04:59.455 } 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1032169 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1032169 ']' 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1032169 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1032169 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1032169' 00:04:59.455 killing process with pid 1032169 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1032169 00:04:59.455 21:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1032169 00:04:59.715 21:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1032200 00:04:59.715 21:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.715 21:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1032200 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1032200 ']' 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1032200 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1032200 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1032200' 00:05:04.992 killing process with pid 1032200 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1032200 00:05:04.992 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1032200 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:05.252 00:05:05.252 real 0m6.321s 00:05:05.252 user 0m5.977s 00:05:05.252 sys 0m0.646s 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 ************************************ 00:05:05.252 END TEST skip_rpc_with_json 00:05:05.252 ************************************ 00:05:05.252 21:48:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 ************************************ 00:05:05.252 START TEST skip_rpc_with_delay 00:05:05.252 ************************************ 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:05:05.252 [2024-09-30 21:48:49.508373] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:05.252 [2024-09-30 21:48:49.508466] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.252 00:05:05.252 real 0m0.041s 00:05:05.252 user 0m0.016s 00:05:05.252 sys 0m0.024s 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.252 21:48:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 ************************************ 00:05:05.252 END TEST skip_rpc_with_delay 00:05:05.252 ************************************ 00:05:05.252 21:48:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.252 21:48:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.252 21:48:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.252 21:48:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 ************************************ 00:05:05.252 START TEST exit_on_failed_rpc_init 00:05:05.252 ************************************ 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1033304 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1033304 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1033304 ']' 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.252 21:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.512 [2024-09-30 21:48:49.626811] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:05.512 [2024-09-30 21:48:49.626888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033304 ] 00:05:05.512 [2024-09-30 21:48:49.695640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.512 [2024-09-30 21:48:49.772654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.772 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.773 21:48:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.773 [2024-09-30 21:48:50.005398] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:05.773 [2024-09-30 21:48:50.005467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033316 ] 00:05:05.773 [2024-09-30 21:48:50.081080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.033 [2024-09-30 21:48:50.159678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.033 [2024-09-30 21:48:50.159756] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:06.033 [2024-09-30 21:48:50.159770] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.033 [2024-09-30 21:48:50.159777] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1033304 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1033304 ']' 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1033304 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033304 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033304' 00:05:06.033 killing process with pid 1033304 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1033304 00:05:06.033 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1033304 00:05:06.293 00:05:06.293 real 0m1.014s 00:05:06.293 user 0m1.082s 00:05:06.293 sys 0m0.421s 00:05:06.293 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.293 21:48:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.293 ************************************ 00:05:06.293 END TEST exit_on_failed_rpc_init 00:05:06.293 ************************************ 00:05:06.293 21:48:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:06.293 00:05:06.293 real 0m13.235s 00:05:06.293 user 0m12.402s 00:05:06.293 sys 0m1.695s 00:05:06.293 21:48:50 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.293 21:48:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.293 ************************************ 00:05:06.293 END TEST skip_rpc 00:05:06.293 ************************************ 00:05:06.553 21:48:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.553 21:48:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.553 21:48:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.553 21:48:50 
-- common/autotest_common.sh@10 -- # set +x 00:05:06.553 ************************************ 00:05:06.553 START TEST rpc_client 00:05:06.553 ************************************ 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.553 * Looking for test storage... 00:05:06.553 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.553 21:48:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.553 --rc genhtml_branch_coverage=1 00:05:06.553 --rc genhtml_function_coverage=1 00:05:06.553 --rc genhtml_legend=1 00:05:06.553 --rc geninfo_all_blocks=1 00:05:06.553 --rc geninfo_unexecuted_blocks=1 00:05:06.553 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.553 ' 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.553 --rc genhtml_branch_coverage=1 00:05:06.553 --rc genhtml_function_coverage=1 00:05:06.553 --rc genhtml_legend=1 00:05:06.553 --rc geninfo_all_blocks=1 00:05:06.553 --rc geninfo_unexecuted_blocks=1 00:05:06.553 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.553 ' 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.553 --rc genhtml_branch_coverage=1 00:05:06.553 --rc genhtml_function_coverage=1 00:05:06.553 --rc genhtml_legend=1 00:05:06.553 --rc geninfo_all_blocks=1 00:05:06.553 --rc geninfo_unexecuted_blocks=1 00:05:06.553 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.553 ' 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.553 --rc genhtml_branch_coverage=1 00:05:06.553 --rc genhtml_function_coverage=1 00:05:06.553 --rc genhtml_legend=1 00:05:06.553 --rc geninfo_all_blocks=1 00:05:06.553 --rc geninfo_unexecuted_blocks=1 00:05:06.553 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.553 ' 00:05:06.553 21:48:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.553 OK 00:05:06.553 21:48:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.553 00:05:06.553 real 0m0.161s 00:05:06.553 user 0m0.079s 00:05:06.553 sys 0m0.096s 00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:06.553 21:48:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.553 ************************************ 00:05:06.553 END TEST rpc_client 00:05:06.553 ************************************ 00:05:06.814 21:48:50 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.814 21:48:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.814 21:48:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.814 21:48:50 -- common/autotest_common.sh@10 -- # set +x 00:05:06.814 ************************************ 00:05:06.814 START TEST json_config 00:05:06.814 ************************************ 00:05:06.814 21:48:50 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.814 21:48:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.814 21:48:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.814 21:48:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.814 21:48:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.814 21:48:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.814 21:48:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:06.814 21:48:51 json_config -- scripts/common.sh@345 -- # : 1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.814 21:48:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.814 21:48:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@353 -- # local d=1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.814 21:48:51 json_config -- scripts/common.sh@355 -- # echo 1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.814 21:48:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@353 -- # local d=2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.814 21:48:51 json_config -- scripts/common.sh@355 -- # echo 2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.814 21:48:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.814 21:48:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.814 21:48:51 json_config -- scripts/common.sh@368 -- # return 0 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.814 --rc genhtml_branch_coverage=1 00:05:06.814 --rc genhtml_function_coverage=1 00:05:06.814 --rc genhtml_legend=1 00:05:06.814 --rc geninfo_all_blocks=1 00:05:06.814 --rc geninfo_unexecuted_blocks=1 00:05:06.814 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.814 ' 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.814 --rc genhtml_branch_coverage=1 00:05:06.814 --rc genhtml_function_coverage=1 00:05:06.814 --rc genhtml_legend=1 00:05:06.814 --rc geninfo_all_blocks=1 00:05:06.814 --rc geninfo_unexecuted_blocks=1 00:05:06.814 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.814 ' 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.814 --rc genhtml_branch_coverage=1 00:05:06.814 --rc genhtml_function_coverage=1 00:05:06.814 --rc genhtml_legend=1 00:05:06.814 --rc geninfo_all_blocks=1 00:05:06.814 --rc geninfo_unexecuted_blocks=1 00:05:06.814 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.814 ' 00:05:06.814 21:48:51 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:06.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.814 --rc genhtml_branch_coverage=1 00:05:06.814 --rc genhtml_function_coverage=1 00:05:06.814 --rc genhtml_legend=1 00:05:06.814 --rc geninfo_all_blocks=1 00:05:06.814 --rc geninfo_unexecuted_blocks=1 00:05:06.814 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:06.814 ' 00:05:06.814 21:48:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.814 21:48:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:06.814 21:48:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.814 21:48:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.814 21:48:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.814 21:48:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.814 21:48:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.814 21:48:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.814 21:48:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.814 21:48:51 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.815 21:48:51 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@51 -- # : 0 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.815 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.815 21:48:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:06.815 WARNING: No tests are enabled so not running JSON configuration tests 00:05:06.815 21:48:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:06.815 00:05:06.815 real 0m0.186s 00:05:06.815 user 0m0.116s 00:05:06.815 sys 0m0.079s 00:05:06.815 21:48:51 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.815 21:48:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.815 ************************************ 00:05:06.815 END TEST json_config 00:05:06.815 ************************************ 00:05:07.075 21:48:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.075 21:48:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.075 21:48:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.075 21:48:51 -- common/autotest_common.sh@10 -- # set +x 00:05:07.075 ************************************ 00:05:07.075 START TEST json_config_extra_key 00:05:07.075 ************************************ 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print 
$NF}' 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.075 21:48:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.075 --rc genhtml_branch_coverage=1 00:05:07.075 --rc genhtml_function_coverage=1 00:05:07.075 --rc genhtml_legend=1 00:05:07.075 --rc geninfo_all_blocks=1 00:05:07.075 --rc geninfo_unexecuted_blocks=1 00:05:07.075 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.075 ' 00:05:07.075 21:48:51 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.075 --rc genhtml_branch_coverage=1 00:05:07.075 
--rc genhtml_function_coverage=1 00:05:07.075 --rc genhtml_legend=1 00:05:07.075 --rc geninfo_all_blocks=1 00:05:07.075 --rc geninfo_unexecuted_blocks=1 00:05:07.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.076 ' 00:05:07.076 21:48:51 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.076 --rc genhtml_branch_coverage=1 00:05:07.076 --rc genhtml_function_coverage=1 00:05:07.076 --rc genhtml_legend=1 00:05:07.076 --rc geninfo_all_blocks=1 00:05:07.076 --rc geninfo_unexecuted_blocks=1 00:05:07.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.076 ' 00:05:07.076 21:48:51 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.076 --rc genhtml_branch_coverage=1 00:05:07.076 --rc genhtml_function_coverage=1 00:05:07.076 --rc genhtml_legend=1 00:05:07.076 --rc geninfo_all_blocks=1 00:05:07.076 --rc geninfo_unexecuted_blocks=1 00:05:07.076 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:07.076 ' 00:05:07.076 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.076 21:48:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:07.076 21:48:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.336 21:48:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.336 21:48:51 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.336 21:48:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.336 21:48:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.336 21:48:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.336 21:48:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.336 21:48:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:07.336 21:48:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.336 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.336 21:48:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:07.336 INFO: launching applications... 00:05:07.336 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1033746 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.337 Waiting for target to run... 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1033746 /var/tmp/spdk_tgt.sock 00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1033746 ']' 00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.337 21:48:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.337 21:48:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.337 [2024-09-30 21:48:51.482418] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:07.337 [2024-09-30 21:48:51.482479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033746 ] 00:05:07.596 [2024-09-30 21:48:51.763332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.596 [2024-09-30 21:48:51.829623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.165 21:48:52 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.165 21:48:52 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:08.165 21:48:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.165 00:05:08.165 21:48:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:08.165 INFO: shutting down applications... 00:05:08.165 21:48:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.165 21:48:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.165 21:48:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.165 21:48:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1033746 ]] 00:05:08.166 21:48:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1033746 00:05:08.166 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.166 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.166 21:48:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1033746 00:05:08.166 21:48:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1033746 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.735 21:48:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.735 SPDK target shutdown done 00:05:08.735 21:48:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.735 Success 00:05:08.735 00:05:08.735 real 0m1.592s 00:05:08.735 user 0m1.359s 00:05:08.735 sys 0m0.423s 00:05:08.735 21:48:52 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.735 21:48:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.735 ************************************ 00:05:08.735 END TEST json_config_extra_key 00:05:08.735 ************************************ 00:05:08.735 21:48:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:05:08.735 21:48:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.735 21:48:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.735 21:48:52 -- common/autotest_common.sh@10 -- # set +x 00:05:08.735 ************************************ 00:05:08.735 START TEST alias_rpc 00:05:08.735 ************************************ 00:05:08.735 21:48:52 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.735 * Looking for test storage... 00:05:08.735 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.735 21:48:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.735 --rc genhtml_branch_coverage=1 00:05:08.735 --rc genhtml_function_coverage=1 00:05:08.735 --rc genhtml_legend=1 00:05:08.735 --rc geninfo_all_blocks=1 00:05:08.735 --rc geninfo_unexecuted_blocks=1 00:05:08.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.735 ' 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.735 --rc genhtml_branch_coverage=1 00:05:08.735 --rc genhtml_function_coverage=1 00:05:08.735 --rc genhtml_legend=1 00:05:08.735 --rc geninfo_all_blocks=1 00:05:08.735 --rc geninfo_unexecuted_blocks=1 00:05:08.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.735 ' 00:05:08.735 21:48:53 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.735 --rc genhtml_branch_coverage=1 00:05:08.735 --rc genhtml_function_coverage=1 00:05:08.735 --rc genhtml_legend=1 00:05:08.735 --rc geninfo_all_blocks=1 00:05:08.735 --rc geninfo_unexecuted_blocks=1 00:05:08.735 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.735 ' 00:05:08.995 21:48:53 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.996 --rc genhtml_branch_coverage=1 00:05:08.996 --rc genhtml_function_coverage=1 00:05:08.996 --rc genhtml_legend=1 00:05:08.996 --rc geninfo_all_blocks=1 00:05:08.996 --rc geninfo_unexecuted_blocks=1 00:05:08.996 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:08.996 ' 00:05:08.996 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.996 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.996 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1034074 00:05:08.996 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1034074 00:05:08.996 21:48:53 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 1034074 ']' 00:05:08.996 21:48:53 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.996 21:48:53 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.996 21:48:53 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.996 21:48:53 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.996 21:48:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.996 [2024-09-30 21:48:53.119511] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:08.996 [2024-09-30 21:48:53.119568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034074 ] 00:05:08.996 [2024-09-30 21:48:53.184837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.996 [2024-09-30 21:48:53.256735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.255 21:48:53 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.255 21:48:53 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.255 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:09.516 21:48:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1034074 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1034074 ']' 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1034074 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034074 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034074' 00:05:09.516 killing process with pid 1034074 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@969 -- # kill 1034074 00:05:09.516 21:48:53 alias_rpc -- common/autotest_common.sh@974 -- # wait 1034074 00:05:09.776 00:05:09.776 real 0m1.140s 00:05:09.776 user 0m1.161s 00:05:09.776 sys 0m0.424s 00:05:09.776 21:48:54 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.776 21:48:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.776 ************************************ 00:05:09.776 END TEST alias_rpc 00:05:09.776 ************************************ 00:05:09.776 21:48:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.776 21:48:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.776 21:48:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.776 21:48:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.776 21:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.776 ************************************ 00:05:09.776 START TEST 
spdkcli_tcp 00:05:09.776 ************************************ 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.036 * Looking for test storage... 00:05:10.036 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.036 21:48:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.036 --rc genhtml_branch_coverage=1 00:05:10.036 --rc genhtml_function_coverage=1 00:05:10.036 --rc genhtml_legend=1 00:05:10.036 --rc geninfo_all_blocks=1 00:05:10.036 --rc geninfo_unexecuted_blocks=1 00:05:10.036 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.036 ' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.036 --rc genhtml_branch_coverage=1 00:05:10.036 --rc genhtml_function_coverage=1 00:05:10.036 --rc genhtml_legend=1 00:05:10.036 --rc geninfo_all_blocks=1 00:05:10.036 --rc geninfo_unexecuted_blocks=1 00:05:10.036 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.036 ' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.036 --rc genhtml_branch_coverage=1 00:05:10.036 --rc genhtml_function_coverage=1 00:05:10.036 --rc genhtml_legend=1 00:05:10.036 --rc geninfo_all_blocks=1 00:05:10.036 --rc geninfo_unexecuted_blocks=1 00:05:10.036 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.036 ' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.036 --rc genhtml_branch_coverage=1 00:05:10.036 --rc genhtml_function_coverage=1 00:05:10.036 --rc genhtml_legend=1 00:05:10.036 --rc geninfo_all_blocks=1 00:05:10.036 --rc geninfo_unexecuted_blocks=1 00:05:10.036 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:10.036 ' 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1034397 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1034397 00:05:10.036 21:48:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1034397 ']' 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.036 21:48:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.036 [2024-09-30 21:48:54.369833] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:10.036 [2024-09-30 21:48:54.369901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034397 ] 00:05:10.296 [2024-09-30 21:48:54.438724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.296 [2024-09-30 21:48:54.514638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.296 [2024-09-30 21:48:54.514640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.865 21:48:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.865 21:48:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:10.865 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1034660 00:05:10.865 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.865 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.125 [ 00:05:11.126 "spdk_get_version", 00:05:11.126 "rpc_get_methods", 00:05:11.126 "notify_get_notifications", 00:05:11.126 "notify_get_types", 00:05:11.126 "trace_get_info", 00:05:11.126 "trace_get_tpoint_group_mask", 00:05:11.126 "trace_disable_tpoint_group", 00:05:11.126 "trace_enable_tpoint_group", 00:05:11.126 "trace_clear_tpoint_mask", 00:05:11.126 "trace_set_tpoint_mask", 00:05:11.126 "fsdev_set_opts", 00:05:11.126 "fsdev_get_opts", 00:05:11.126 "framework_get_pci_devices", 00:05:11.126 "framework_get_config", 00:05:11.126 "framework_get_subsystems", 00:05:11.126 "vfu_tgt_set_base_path", 00:05:11.126 
"keyring_get_keys", 00:05:11.126 "iobuf_get_stats", 00:05:11.126 "iobuf_set_options", 00:05:11.126 "sock_get_default_impl", 00:05:11.126 "sock_set_default_impl", 00:05:11.126 "sock_impl_set_options", 00:05:11.126 "sock_impl_get_options", 00:05:11.126 "vmd_rescan", 00:05:11.126 "vmd_remove_device", 00:05:11.126 "vmd_enable", 00:05:11.126 "accel_get_stats", 00:05:11.126 "accel_set_options", 00:05:11.126 "accel_set_driver", 00:05:11.126 "accel_crypto_key_destroy", 00:05:11.126 "accel_crypto_keys_get", 00:05:11.126 "accel_crypto_key_create", 00:05:11.126 "accel_assign_opc", 00:05:11.126 "accel_get_module_info", 00:05:11.126 "accel_get_opc_assignments", 00:05:11.126 "bdev_get_histogram", 00:05:11.126 "bdev_enable_histogram", 00:05:11.126 "bdev_set_qos_limit", 00:05:11.126 "bdev_set_qd_sampling_period", 00:05:11.126 "bdev_get_bdevs", 00:05:11.126 "bdev_reset_iostat", 00:05:11.126 "bdev_get_iostat", 00:05:11.126 "bdev_examine", 00:05:11.126 "bdev_wait_for_examine", 00:05:11.126 "bdev_set_options", 00:05:11.126 "scsi_get_devices", 00:05:11.126 "thread_set_cpumask", 00:05:11.126 "scheduler_set_options", 00:05:11.126 "framework_get_governor", 00:05:11.126 "framework_get_scheduler", 00:05:11.126 "framework_set_scheduler", 00:05:11.126 "framework_get_reactors", 00:05:11.126 "thread_get_io_channels", 00:05:11.126 "thread_get_pollers", 00:05:11.126 "thread_get_stats", 00:05:11.126 "framework_monitor_context_switch", 00:05:11.126 "spdk_kill_instance", 00:05:11.126 "log_enable_timestamps", 00:05:11.126 "log_get_flags", 00:05:11.126 "log_clear_flag", 00:05:11.126 "log_set_flag", 00:05:11.126 "log_get_level", 00:05:11.126 "log_set_level", 00:05:11.126 "log_get_print_level", 00:05:11.126 "log_set_print_level", 00:05:11.126 "framework_enable_cpumask_locks", 00:05:11.126 "framework_disable_cpumask_locks", 00:05:11.126 "framework_wait_init", 00:05:11.126 "framework_start_init", 00:05:11.126 "virtio_blk_create_transport", 00:05:11.126 "virtio_blk_get_transports", 00:05:11.126 "vhost_controller_set_coalescing", 00:05:11.126 "vhost_get_controllers", 00:05:11.126 "vhost_delete_controller", 00:05:11.126 "vhost_create_blk_controller", 00:05:11.126 "vhost_scsi_controller_remove_target", 00:05:11.126 "vhost_scsi_controller_add_target", 00:05:11.126 "vhost_start_scsi_controller", 00:05:11.126 "vhost_create_scsi_controller", 00:05:11.126 "ublk_recover_disk", 00:05:11.126 "ublk_get_disks", 00:05:11.126 "ublk_stop_disk", 00:05:11.126 "ublk_start_disk", 00:05:11.126 "ublk_destroy_target", 00:05:11.126 "ublk_create_target", 00:05:11.126 "nbd_get_disks", 00:05:11.126 "nbd_stop_disk", 00:05:11.126 "nbd_start_disk", 00:05:11.126 "env_dpdk_get_mem_stats", 00:05:11.126 "nvmf_stop_mdns_prr", 00:05:11.126 "nvmf_publish_mdns_prr", 00:05:11.126 "nvmf_subsystem_get_listeners", 00:05:11.126 "nvmf_subsystem_get_qpairs", 00:05:11.126 "nvmf_subsystem_get_controllers", 00:05:11.126 "nvmf_get_stats", 00:05:11.126 "nvmf_get_transports", 00:05:11.126 "nvmf_create_transport", 00:05:11.126 "nvmf_get_targets", 00:05:11.126 "nvmf_delete_target", 00:05:11.126 "nvmf_create_target", 00:05:11.126 "nvmf_subsystem_allow_any_host", 00:05:11.126 "nvmf_subsystem_set_keys", 00:05:11.126 "nvmf_subsystem_remove_host", 00:05:11.126 "nvmf_subsystem_add_host", 00:05:11.126 "nvmf_ns_remove_host", 00:05:11.126 "nvmf_ns_add_host", 00:05:11.126 "nvmf_subsystem_remove_ns", 00:05:11.126 "nvmf_subsystem_set_ns_ana_group", 00:05:11.126 "nvmf_subsystem_add_ns", 00:05:11.126 "nvmf_subsystem_listener_set_ana_state", 00:05:11.126 "nvmf_discovery_get_referrals", 
00:05:11.126 "nvmf_discovery_remove_referral", 00:05:11.126 "nvmf_discovery_add_referral", 00:05:11.126 "nvmf_subsystem_remove_listener", 00:05:11.126 "nvmf_subsystem_add_listener", 00:05:11.126 "nvmf_delete_subsystem", 00:05:11.126 "nvmf_create_subsystem", 00:05:11.126 "nvmf_get_subsystems", 00:05:11.126 "nvmf_set_crdt", 00:05:11.126 "nvmf_set_config", 00:05:11.126 "nvmf_set_max_subsystems", 00:05:11.126 "iscsi_get_histogram", 00:05:11.126 "iscsi_enable_histogram", 00:05:11.126 "iscsi_set_options", 00:05:11.126 "iscsi_get_auth_groups", 00:05:11.126 "iscsi_auth_group_remove_secret", 00:05:11.126 "iscsi_auth_group_add_secret", 00:05:11.126 "iscsi_delete_auth_group", 00:05:11.126 "iscsi_create_auth_group", 00:05:11.126 "iscsi_set_discovery_auth", 00:05:11.126 "iscsi_get_options", 00:05:11.126 "iscsi_target_node_request_logout", 00:05:11.126 "iscsi_target_node_set_redirect", 00:05:11.126 "iscsi_target_node_set_auth", 00:05:11.126 "iscsi_target_node_add_lun", 00:05:11.126 "iscsi_get_stats", 00:05:11.126 "iscsi_get_connections", 00:05:11.126 "iscsi_portal_group_set_auth", 00:05:11.126 "iscsi_start_portal_group", 00:05:11.126 "iscsi_delete_portal_group", 00:05:11.126 "iscsi_create_portal_group", 00:05:11.126 "iscsi_get_portal_groups", 00:05:11.126 "iscsi_delete_target_node", 00:05:11.126 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.126 "iscsi_target_node_add_pg_ig_maps", 00:05:11.126 "iscsi_create_target_node", 00:05:11.126 "iscsi_get_target_nodes", 00:05:11.126 "iscsi_delete_initiator_group", 00:05:11.126 "iscsi_initiator_group_remove_initiators", 00:05:11.126 "iscsi_initiator_group_add_initiators", 00:05:11.126 "iscsi_create_initiator_group", 00:05:11.126 "iscsi_get_initiator_groups", 00:05:11.126 "fsdev_aio_delete", 00:05:11.126 "fsdev_aio_create", 00:05:11.126 "keyring_linux_set_options", 00:05:11.126 "keyring_file_remove_key", 00:05:11.126 "keyring_file_add_key", 00:05:11.126 "vfu_virtio_create_fs_endpoint", 00:05:11.126 "vfu_virtio_create_scsi_endpoint", 00:05:11.126 "vfu_virtio_scsi_remove_target", 00:05:11.126 "vfu_virtio_scsi_add_target", 00:05:11.126 "vfu_virtio_create_blk_endpoint", 00:05:11.126 "vfu_virtio_delete_endpoint", 00:05:11.126 "iaa_scan_accel_module", 00:05:11.126 "dsa_scan_accel_module", 00:05:11.126 "ioat_scan_accel_module", 00:05:11.126 "accel_error_inject_error", 00:05:11.126 "bdev_iscsi_delete", 00:05:11.126 "bdev_iscsi_create", 00:05:11.126 "bdev_iscsi_set_options", 00:05:11.126 "bdev_virtio_attach_controller", 00:05:11.126 "bdev_virtio_scsi_get_devices", 00:05:11.126 "bdev_virtio_detach_controller", 00:05:11.126 "bdev_virtio_blk_set_hotplug", 00:05:11.126 "bdev_ftl_set_property", 00:05:11.126 "bdev_ftl_get_properties", 00:05:11.126 "bdev_ftl_get_stats", 00:05:11.126 "bdev_ftl_unmap", 00:05:11.126 "bdev_ftl_unload", 00:05:11.126 "bdev_ftl_delete", 00:05:11.126 "bdev_ftl_load", 00:05:11.126 "bdev_ftl_create", 00:05:11.126 "bdev_aio_delete", 00:05:11.126 "bdev_aio_rescan", 00:05:11.126 "bdev_aio_create", 00:05:11.126 "blobfs_create", 00:05:11.126 "blobfs_detect", 00:05:11.126 "blobfs_set_cache_size", 00:05:11.126 "bdev_zone_block_delete", 00:05:11.126 "bdev_zone_block_create", 00:05:11.126 "bdev_delay_delete", 00:05:11.126 "bdev_delay_create", 00:05:11.126 "bdev_delay_update_latency", 00:05:11.126 "bdev_split_delete", 00:05:11.126 "bdev_split_create", 00:05:11.126 "bdev_error_inject_error", 00:05:11.126 "bdev_error_delete", 00:05:11.126 "bdev_error_create", 00:05:11.126 "bdev_raid_set_options", 00:05:11.126 "bdev_raid_remove_base_bdev", 00:05:11.126 
"bdev_raid_add_base_bdev", 00:05:11.126 "bdev_raid_delete", 00:05:11.126 "bdev_raid_create", 00:05:11.126 "bdev_raid_get_bdevs", 00:05:11.126 "bdev_lvol_set_parent_bdev", 00:05:11.126 "bdev_lvol_set_parent", 00:05:11.126 "bdev_lvol_check_shallow_copy", 00:05:11.126 "bdev_lvol_start_shallow_copy", 00:05:11.126 "bdev_lvol_grow_lvstore", 00:05:11.126 "bdev_lvol_get_lvols", 00:05:11.126 "bdev_lvol_get_lvstores", 00:05:11.126 "bdev_lvol_delete", 00:05:11.126 "bdev_lvol_set_read_only", 00:05:11.126 "bdev_lvol_resize", 00:05:11.126 "bdev_lvol_decouple_parent", 00:05:11.127 "bdev_lvol_inflate", 00:05:11.127 "bdev_lvol_rename", 00:05:11.127 "bdev_lvol_clone_bdev", 00:05:11.127 "bdev_lvol_clone", 00:05:11.127 "bdev_lvol_snapshot", 00:05:11.127 "bdev_lvol_create", 00:05:11.127 "bdev_lvol_delete_lvstore", 00:05:11.127 "bdev_lvol_rename_lvstore", 00:05:11.127 "bdev_lvol_create_lvstore", 00:05:11.127 "bdev_passthru_delete", 00:05:11.127 "bdev_passthru_create", 00:05:11.127 "bdev_nvme_cuse_unregister", 00:05:11.127 "bdev_nvme_cuse_register", 00:05:11.127 "bdev_opal_new_user", 00:05:11.127 "bdev_opal_set_lock_state", 00:05:11.127 "bdev_opal_delete", 00:05:11.127 "bdev_opal_get_info", 00:05:11.127 "bdev_opal_create", 00:05:11.127 "bdev_nvme_opal_revert", 00:05:11.127 "bdev_nvme_opal_init", 00:05:11.127 "bdev_nvme_send_cmd", 00:05:11.127 "bdev_nvme_set_keys", 00:05:11.127 "bdev_nvme_get_path_iostat", 00:05:11.127 "bdev_nvme_get_mdns_discovery_info", 00:05:11.127 "bdev_nvme_stop_mdns_discovery", 00:05:11.127 "bdev_nvme_start_mdns_discovery", 00:05:11.127 "bdev_nvme_set_multipath_policy", 00:05:11.127 "bdev_nvme_set_preferred_path", 00:05:11.127 "bdev_nvme_get_io_paths", 00:05:11.127 "bdev_nvme_remove_error_injection", 00:05:11.127 "bdev_nvme_add_error_injection", 00:05:11.127 "bdev_nvme_get_discovery_info", 00:05:11.127 "bdev_nvme_stop_discovery", 00:05:11.127 "bdev_nvme_start_discovery", 00:05:11.127 "bdev_nvme_get_controller_health_info", 00:05:11.127 "bdev_nvme_disable_controller", 00:05:11.127 "bdev_nvme_enable_controller", 00:05:11.127 "bdev_nvme_reset_controller", 00:05:11.127 "bdev_nvme_get_transport_statistics", 00:05:11.127 "bdev_nvme_apply_firmware", 00:05:11.127 "bdev_nvme_detach_controller", 00:05:11.127 "bdev_nvme_get_controllers", 00:05:11.127 "bdev_nvme_attach_controller", 00:05:11.127 "bdev_nvme_set_hotplug", 00:05:11.127 "bdev_nvme_set_options", 00:05:11.127 "bdev_null_resize", 00:05:11.127 "bdev_null_delete", 00:05:11.127 "bdev_null_create", 00:05:11.127 "bdev_malloc_delete", 00:05:11.127 "bdev_malloc_create" 00:05:11.127 ] 00:05:11.127 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.127 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.127 21:48:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1034397 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1034397 ']' 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1034397 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.127 21:48:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034397 00:05:11.387 21:48:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.387 
21:48:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.387 21:48:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034397' 00:05:11.387 killing process with pid 1034397 00:05:11.387 21:48:55 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1034397 00:05:11.387 21:48:55 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1034397 00:05:11.647 00:05:11.647 real 0m1.686s 00:05:11.647 user 0m3.047s 00:05:11.647 sys 0m0.545s 00:05:11.647 21:48:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.647 21:48:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.647 ************************************ 00:05:11.647 END TEST spdkcli_tcp 00:05:11.647 ************************************ 00:05:11.647 21:48:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.647 21:48:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.647 21:48:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.647 21:48:55 -- common/autotest_common.sh@10 -- # set +x 00:05:11.647 ************************************ 00:05:11.647 START TEST dpdk_mem_utility 00:05:11.647 ************************************ 00:05:11.647 21:48:55 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.647 * Looking for test storage... 00:05:11.907 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.907 21:48:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.907 --rc genhtml_branch_coverage=1 00:05:11.907 --rc genhtml_function_coverage=1 00:05:11.907 --rc genhtml_legend=1 00:05:11.907 --rc geninfo_all_blocks=1 00:05:11.907 --rc geninfo_unexecuted_blocks=1 00:05:11.907 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.907 ' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.907 --rc genhtml_branch_coverage=1 00:05:11.907 --rc genhtml_function_coverage=1 00:05:11.907 --rc genhtml_legend=1 00:05:11.907 --rc geninfo_all_blocks=1 00:05:11.907 --rc geninfo_unexecuted_blocks=1 00:05:11.907 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.907 ' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.907 --rc genhtml_branch_coverage=1 00:05:11.907 --rc genhtml_function_coverage=1 00:05:11.907 --rc genhtml_legend=1 00:05:11.907 --rc geninfo_all_blocks=1 00:05:11.907 --rc geninfo_unexecuted_blocks=1 00:05:11.907 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.907 ' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.907 --rc genhtml_branch_coverage=1 00:05:11.907 --rc genhtml_function_coverage=1 00:05:11.907 --rc genhtml_legend=1 00:05:11.907 --rc geninfo_all_blocks=1 00:05:11.907 --rc geninfo_unexecuted_blocks=1 00:05:11.907 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:11.907 ' 00:05:11.907 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.907 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1034747 00:05:11.907 21:48:56 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.907 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1034747 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1034747 ']' 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.907 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.908 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.908 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.908 [2024-09-30 21:48:56.134671] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:11.908 [2024-09-30 21:48:56.134738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034747 ] 00:05:11.908 [2024-09-30 21:48:56.200652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.908 [2024-09-30 21:48:56.271659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.168 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.168 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:12.168 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.168 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.168 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.168 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.168 { 00:05:12.168 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.168 } 00:05:12.168 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.168 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.428 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:12.428 1 heaps totaling size 860.000000 MiB 00:05:12.428 size: 860.000000 MiB heap id: 0 00:05:12.428 end heaps---------- 00:05:12.428 9 mempools totaling size 642.649841 MiB 00:05:12.428 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.428 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.428 size: 92.545471 MiB name: bdev_io_1034747 00:05:12.428 size: 51.011292 MiB name: evtpool_1034747 00:05:12.428 size: 50.003479 MiB name: msgpool_1034747 00:05:12.428 size: 36.509338 MiB name: fsdev_io_1034747 00:05:12.428 size: 21.763794 MiB name: PDU_Pool 00:05:12.428 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.428 size: 0.026123 MiB name: Session_Pool 00:05:12.428 end mempools------- 00:05:12.428 6 memzones totaling size 4.142822 MiB 00:05:12.428 size: 1.000366 MiB name: RG_ring_0_1034747 00:05:12.428 size: 1.000366 MiB name: RG_ring_1_1034747 00:05:12.428 size: 1.000366 MiB name: RG_ring_4_1034747 
00:05:12.428 size: 1.000366 MiB name: RG_ring_5_1034747 00:05:12.428 size: 0.125366 MiB name: RG_ring_2_1034747 00:05:12.428 size: 0.015991 MiB name: RG_ring_3_1034747 00:05:12.428 end memzones------- 00:05:12.428 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.428 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:12.428 list of free elements. size: 13.984680 MiB 00:05:12.428 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:12.428 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:12.428 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:12.428 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:12.428 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:12.428 element at address: 0x20000b200000 with size: 0.959839 MiB 00:05:12.428 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:12.428 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:12.428 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:12.428 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:12.428 element at address: 0x200003e00000 with size: 0.495605 MiB 00:05:12.428 element at address: 0x200007000000 with size: 0.490723 MiB 00:05:12.428 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:12.428 element at address: 0x200013800000 with size: 0.481934 MiB 00:05:12.428 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:12.428 element at address: 0x200003a00000 with size: 0.354858 MiB 00:05:12.428 list of standard malloc elements. size: 199.218628 MiB 00:05:12.428 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:12.428 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:12.428 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:12.428 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:12.428 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:12.428 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:12.428 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:12.428 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:12.428 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:12.428 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003a5ad80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200003e7ee00 with size: 
0.000183 MiB 00:05:12.428 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20000707da00 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20000707dac0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001387b600 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001387b6c0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x2000138fb980 with size: 0.000183 MiB 00:05:12.428 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:12.428 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:12.429 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:12.429 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:12.429 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:12.429 list of memzone associated elements. size: 646.796692 MiB 00:05:12.429 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:12.429 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:12.429 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:12.429 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:12.429 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:12.429 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1034747_0 00:05:12.429 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:12.429 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1034747_0 00:05:12.429 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:12.429 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1034747_0 00:05:12.429 element at address: 0x2000139fdb80 with size: 36.008911 MiB 00:05:12.429 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1034747_0 00:05:12.429 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:12.429 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:12.429 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:12.429 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:12.429 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:12.429 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1034747 00:05:12.429 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:12.429 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1034747 00:05:12.429 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:12.429 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1034747 00:05:12.429 element at address: 0x2000138fba40 with size: 1.008118 MiB 00:05:12.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:12.429 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:12.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:12.429 element at address: 0x20000b2fde40 with 
size: 1.008118 MiB 00:05:12.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:12.429 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:12.429 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:12.429 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:12.429 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1034747 00:05:12.429 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:12.429 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1034747 00:05:12.429 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:12.429 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1034747 00:05:12.429 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:12.429 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1034747 00:05:12.429 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:12.429 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1034747 00:05:12.429 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:12.429 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1034747 00:05:12.429 element at address: 0x20001387b780 with size: 0.500488 MiB 00:05:12.429 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:12.429 element at address: 0x20000707db80 with size: 0.500488 MiB 00:05:12.429 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:12.429 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:12.429 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:12.429 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:12.429 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1034747 00:05:12.429 element at address: 0x20000b2f5b80 with size: 0.031738 MiB 00:05:12.429 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:12.429 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:12.429 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:12.429 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:12.429 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1034747 00:05:12.429 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:12.429 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:12.429 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:12.429 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1034747 00:05:12.429 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:12.429 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1034747 00:05:12.429 element at address: 0x200003a5ae40 with size: 0.000305 MiB 00:05:12.429 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1034747 00:05:12.429 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:12.429 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:12.429 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:12.429 21:48:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1034747 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1034747 ']' 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1034747 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 
00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1034747 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1034747' 00:05:12.429 killing process with pid 1034747 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1034747 00:05:12.429 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1034747 00:05:12.689 00:05:12.689 real 0m1.061s 00:05:12.689 user 0m0.969s 00:05:12.689 sys 0m0.448s 00:05:12.689 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.689 21:48:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.689 ************************************ 00:05:12.689 END TEST dpdk_mem_utility 00:05:12.689 ************************************ 00:05:12.689 21:48:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:12.689 21:48:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.689 21:48:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.689 21:48:57 -- common/autotest_common.sh@10 -- # set +x 00:05:12.689 ************************************ 00:05:12.689 START TEST event 00:05:12.689 ************************************ 00:05:12.689 21:48:57 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:12.948 * Looking for test storage... 00:05:12.949 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.949 21:48:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.949 21:48:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.949 21:48:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.949 21:48:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.949 21:48:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.949 21:48:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.949 21:48:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.949 21:48:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.949 21:48:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.949 21:48:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.949 21:48:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.949 21:48:57 event -- scripts/common.sh@344 -- # case "$op" in 00:05:12.949 21:48:57 event -- scripts/common.sh@345 -- # : 1 00:05:12.949 21:48:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.949 21:48:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.949 21:48:57 event -- scripts/common.sh@365 -- # decimal 1 00:05:12.949 21:48:57 event -- scripts/common.sh@353 -- # local d=1 00:05:12.949 21:48:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.949 21:48:57 event -- scripts/common.sh@355 -- # echo 1 00:05:12.949 21:48:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.949 21:48:57 event -- scripts/common.sh@366 -- # decimal 2 00:05:12.949 21:48:57 event -- scripts/common.sh@353 -- # local d=2 00:05:12.949 21:48:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.949 21:48:57 event -- scripts/common.sh@355 -- # echo 2 00:05:12.949 21:48:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.949 21:48:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.949 21:48:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.949 21:48:57 event -- scripts/common.sh@368 -- # return 0 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.949 --rc genhtml_branch_coverage=1 00:05:12.949 --rc genhtml_function_coverage=1 00:05:12.949 --rc genhtml_legend=1 00:05:12.949 --rc geninfo_all_blocks=1 00:05:12.949 --rc geninfo_unexecuted_blocks=1 00:05:12.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.949 ' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.949 --rc genhtml_branch_coverage=1 00:05:12.949 --rc genhtml_function_coverage=1 00:05:12.949 --rc genhtml_legend=1 00:05:12.949 --rc geninfo_all_blocks=1 00:05:12.949 --rc geninfo_unexecuted_blocks=1 00:05:12.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.949 ' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.949 --rc genhtml_branch_coverage=1 00:05:12.949 --rc genhtml_function_coverage=1 00:05:12.949 --rc genhtml_legend=1 00:05:12.949 --rc geninfo_all_blocks=1 00:05:12.949 --rc geninfo_unexecuted_blocks=1 00:05:12.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.949 ' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.949 --rc genhtml_branch_coverage=1 00:05:12.949 --rc genhtml_function_coverage=1 00:05:12.949 --rc genhtml_legend=1 00:05:12.949 --rc geninfo_all_blocks=1 00:05:12.949 --rc geninfo_unexecuted_blocks=1 00:05:12.949 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.949 ' 00:05:12.949 21:48:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:12.949 21:48:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.949 21:48:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:12.949 21:48:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:05:12.949 21:48:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.949 ************************************ 00:05:12.949 START TEST event_perf 00:05:12.949 ************************************ 00:05:12.949 21:48:57 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.949 Running I/O for 1 seconds...[2024-09-30 21:48:57.303738] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:12.949 [2024-09-30 21:48:57.303844] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035073 ] 00:05:13.208 [2024-09-30 21:48:57.374326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.208 [2024-09-30 21:48:57.448429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.208 [2024-09-30 21:48:57.448525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.208 [2024-09-30 21:48:57.448591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.208 [2024-09-30 21:48:57.448592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.143 Running I/O for 1 seconds... 00:05:14.143 lcore 0: 195471 00:05:14.143 lcore 1: 195473 00:05:14.143 lcore 2: 195474 00:05:14.143 lcore 3: 195474 00:05:14.143 done. 00:05:14.402 00:05:14.402 real 0m1.228s 00:05:14.402 user 0m4.129s 00:05:14.402 sys 0m0.095s 00:05:14.402 21:48:58 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.402 21:48:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.402 ************************************ 00:05:14.402 END TEST event_perf 00:05:14.402 ************************************ 00:05:14.402 21:48:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.402 21:48:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:14.402 21:48:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.402 21:48:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.402 ************************************ 00:05:14.402 START TEST event_reactor 00:05:14.402 ************************************ 00:05:14.402 21:48:58 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.402 [2024-09-30 21:48:58.616341] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:14.402 [2024-09-30 21:48:58.616428] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035358 ] 00:05:14.402 [2024-09-30 21:48:58.687369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.402 [2024-09-30 21:48:58.760539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.776 test_start 00:05:15.776 oneshot 00:05:15.776 tick 100 00:05:15.776 tick 100 00:05:15.776 tick 250 00:05:15.776 tick 100 00:05:15.776 tick 100 00:05:15.776 tick 100 00:05:15.776 tick 250 00:05:15.776 tick 500 00:05:15.776 tick 100 00:05:15.776 tick 100 00:05:15.776 tick 250 00:05:15.776 tick 100 00:05:15.776 tick 100 00:05:15.776 test_end 00:05:15.776 00:05:15.776 real 0m1.227s 00:05:15.776 user 0m1.134s 00:05:15.776 sys 0m0.089s 00:05:15.776 21:48:59 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.776 21:48:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:15.776 ************************************ 00:05:15.776 END TEST event_reactor 00:05:15.776 ************************************ 00:05:15.776 21:48:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.776 21:48:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:15.776 21:48:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.776 21:48:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.776 ************************************ 00:05:15.776 START TEST event_reactor_perf 00:05:15.776 ************************************ 00:05:15.776 21:48:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.776 [2024-09-30 21:48:59.929699] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:15.776 [2024-09-30 21:48:59.929785] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035641 ] 00:05:15.776 [2024-09-30 21:49:00.000069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.776 [2024-09-30 21:49:00.086591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.150 test_start 00:05:17.150 test_end 00:05:17.150 Performance: 905140 events per second 00:05:17.150 00:05:17.150 real 0m1.241s 00:05:17.150 user 0m1.142s 00:05:17.150 sys 0m0.094s 00:05:17.150 21:49:01 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.150 21:49:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 ************************************ 00:05:17.150 END TEST event_reactor_perf 00:05:17.150 ************************************ 00:05:17.150 21:49:01 event -- event/event.sh@49 -- # uname -s 00:05:17.150 21:49:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:17.150 21:49:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.150 21:49:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.150 21:49:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.150 21:49:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 ************************************ 00:05:17.150 START TEST event_scheduler 00:05:17.150 ************************************ 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.150 * Looking for test storage... 
00:05:17.150 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.150 21:49:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:17.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.150 --rc genhtml_branch_coverage=1 00:05:17.150 --rc genhtml_function_coverage=1 00:05:17.150 --rc genhtml_legend=1 00:05:17.150 --rc geninfo_all_blocks=1 00:05:17.150 --rc geninfo_unexecuted_blocks=1 00:05:17.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.150 ' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:17.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.150 --rc genhtml_branch_coverage=1 00:05:17.150 --rc genhtml_function_coverage=1 00:05:17.150 --rc genhtml_legend=1 00:05:17.150 --rc geninfo_all_blocks=1 00:05:17.150 --rc geninfo_unexecuted_blocks=1 00:05:17.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.150 ' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:17.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.150 --rc genhtml_branch_coverage=1 00:05:17.150 --rc genhtml_function_coverage=1 00:05:17.150 --rc genhtml_legend=1 00:05:17.150 --rc geninfo_all_blocks=1 00:05:17.150 --rc geninfo_unexecuted_blocks=1 00:05:17.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.150 ' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:17.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.150 --rc genhtml_branch_coverage=1 00:05:17.150 --rc genhtml_function_coverage=1 00:05:17.150 --rc genhtml_legend=1 00:05:17.150 --rc geninfo_all_blocks=1 00:05:17.150 --rc geninfo_unexecuted_blocks=1 00:05:17.150 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:17.150 ' 00:05:17.150 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:17.150 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1035963 00:05:17.150 21:49:01 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.150 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:17.150 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1035963 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1035963 ']' 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.150 21:49:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.150 [2024-09-30 21:49:01.464063] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:17.151 [2024-09-30 21:49:01.464142] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035963 ] 00:05:17.410 [2024-09-30 21:49:01.527988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.410 [2024-09-30 21:49:01.601505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.410 [2024-09-30 21:49:01.601593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.410 [2024-09-30 21:49:01.601677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.410 [2024-09-30 21:49:01.601680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:17.410 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 [2024-09-30 21:49:01.646306] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:17.410 [2024-09-30 21:49:01.646332] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:17.410 [2024-09-30 21:49:01.646348] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:17.410 [2024-09-30 21:49:01.646358] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:17.410 [2024-09-30 21:49:01.646368] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.410 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
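What the trace above amounts to: scheduler.sh launches the standalone scheduler test app with --wait-for-rpc so nothing starts until the RPC socket is up, switches the framework to the dynamic scheduler, and only then completes initialization. A minimal sketch of that sequence, reconstructed from the commands logged in this run (the workspace path, core mask 0xF and main lcore 0x2 are specific to this job; the backgrounding and pid capture are approximated):

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
# Start the test app gated on RPC; -m is the reactor core mask, -p the main lcore.
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
waitforlisten "$scheduler_pid"                # returns once /var/tmp/spdk.sock accepts RPCs
rpc_cmd framework_set_scheduler dynamic       # hand thread placement to the dynamic scheduler
rpc_cmd framework_start_init                  # finish SPDK init now that the scheduler is chosen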
00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 [2024-09-30 21:49:01.718790] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.410 21:49:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.410 21:49:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 ************************************ 00:05:17.410 START TEST scheduler_create_thread 00:05:17.410 ************************************ 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.410 2 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.410 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 3 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 4 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 5 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 
21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 6 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 7 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 8 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 9 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 10 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.669 21:49:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.603 21:49:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.603 21:49:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.603 21:49:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.603 21:49:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.981 21:49:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.981 21:49:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:19.981 21:49:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:19.981 21:49:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.981 21:49:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.917 21:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.917 00:05:20.917 real 0m3.381s 00:05:20.917 user 0m0.024s 00:05:20.917 sys 0m0.006s 00:05:20.917 21:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.917 21:49:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.917 ************************************ 00:05:20.917 END TEST scheduler_create_thread 00:05:20.917 ************************************ 00:05:20.917 21:49:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.917 21:49:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1035963 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1035963 ']' 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1035963 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035963 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035963' 00:05:20.917 killing process with pid 1035963 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1035963 00:05:20.917 21:49:05 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1035963 00:05:21.176 [2024-09-30 21:49:05.522795] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
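The scheduler_create_thread test that just ended drives everything through the scheduler_plugin RPCs: create pinned active and idle threads with a core mask (-m) and an activity percentage (-a), change one thread's activity at runtime, and delete another. A condensed sketch of those calls as they appear in the trace (thread ids 11 and 12 are from this run; capturing them via command substitution is an approximation of how the test script does it):

# Active threads pinned to single cores, reported 100% busy; idle twins at 0%.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# Unpinned threads: one kept at 30% activity, one created idle and raised to 50% later.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
# A throwaway thread to exercise deletion.
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"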
00:05:21.435 00:05:21.435 real 0m4.507s 00:05:21.435 user 0m7.800s 00:05:21.435 sys 0m0.432s 00:05:21.435 21:49:05 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.435 21:49:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.435 ************************************ 00:05:21.435 END TEST event_scheduler 00:05:21.435 ************************************ 00:05:21.435 21:49:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:21.435 21:49:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:21.435 21:49:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.435 21:49:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.435 21:49:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.694 ************************************ 00:05:21.694 START TEST app_repeat 00:05:21.694 ************************************ 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1036724 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1036724' 00:05:21.694 Process app_repeat pid: 1036724 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:21.694 spdk_app_start Round 0 00:05:21.694 21:49:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036724 /var/tmp/spdk-nbd.sock 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1036724 ']' 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.694 21:49:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.694 [2024-09-30 21:49:05.869131] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
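app_repeat is a start/stop stress test: the harness launches test/event/app_repeat/app_repeat against its own RPC socket and, in each of its rounds (spdk_app_start Round 0 through Round 2), registers two malloc bdevs, exposes them as /dev/nbd0 and /dev/nbd1, verifies data through them, then shuts the framework down with spdk_kill_instance SIGTERM so app_repeat can bring it up again for the next round. The launch step, taken from the command logged above (path and mask are from this job; backgrounding and pid capture are approximated, and -t forwards the harness's repeat_times=4):

SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
rpc_server=/var/tmp/spdk-nbd.sock
# -r selects the RPC socket, -m 0x3 runs two reactors, -t 4 carries repeat_times.
$SPDK/test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
waitforlisten "$repeat_pid" "$rpc_server"     # block until the app's RPC socket is listening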
00:05:21.694 [2024-09-30 21:49:05.869209] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036724 ] 00:05:21.694 [2024-09-30 21:49:05.941199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.694 [2024-09-30 21:49:06.021316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.694 [2024-09-30 21:49:06.021316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.953 21:49:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.953 21:49:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:21.953 21:49:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.953 Malloc0 00:05:21.953 21:49:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.212 Malloc1 00:05:22.213 21:49:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.213 21:49:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.472 /dev/nbd0 00:05:22.472 21:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.472 21:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.472 1+0 records in 00:05:22.472 1+0 records out 00:05:22.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.9039e-05 s, 41.4 MB/s 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.472 21:49:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.472 21:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.472 21:49:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.472 21:49:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.731 /dev/nbd1 00:05:22.731 21:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.731 21:49:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.731 1+0 records in 00:05:22.731 1+0 records out 00:05:22.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249622 s, 16.4 MB/s 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.731 21:49:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:22.731 21:49:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.731 21:49:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.731 21:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.731 21:49:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
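Every nbd_start_disk above is followed by the waitfornbd helper: it watches /proc/partitions until the new device shows up and then pushes one direct-I/O 4 KiB read through it, so a mapping that registered but cannot serve I/O fails the test early. Roughly, assuming the loop structure matches what the trace shows (the 20-iteration bound, the dd arguments and the size check are from the logged commands; the sleep interval is a guess and the temp-file path is shortened here):

# Probe a freshly mapped nbd device, e.g. waitfornbd nbd0
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break    # device visible yet?
        sleep 0.1
    done
    # One O_DIRECT read of a single 4 KiB block proves the device answers I/O.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                                        # the trace checks '[ 4096 != 0 ]'
}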
00:05:22.731 21:49:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.731 21:49:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.731 21:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.990 21:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.990 { 00:05:22.990 "nbd_device": "/dev/nbd0", 00:05:22.990 "bdev_name": "Malloc0" 00:05:22.990 }, 00:05:22.990 { 00:05:22.990 "nbd_device": "/dev/nbd1", 00:05:22.990 "bdev_name": "Malloc1" 00:05:22.991 } 00:05:22.991 ]' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.991 { 00:05:22.991 "nbd_device": "/dev/nbd0", 00:05:22.991 "bdev_name": "Malloc0" 00:05:22.991 }, 00:05:22.991 { 00:05:22.991 "nbd_device": "/dev/nbd1", 00:05:22.991 "bdev_name": "Malloc1" 00:05:22.991 } 00:05:22.991 ]' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.991 /dev/nbd1' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.991 /dev/nbd1' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.991 256+0 records in 00:05:22.991 256+0 records out 00:05:22.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105264 s, 99.6 MB/s 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.991 256+0 records in 00:05:22.991 256+0 records out 00:05:22.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198567 s, 52.8 MB/s 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.991 256+0 records in 00:05:22.991 256+0 records out 00:05:22.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021464 s, 48.9 MB/s 
00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.991 21:49:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.250 21:49:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.509 21:49:07 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.509 21:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.768 21:49:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.768 21:49:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.768 21:49:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.027 21:49:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.027 [2024-09-30 21:49:08.388033] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.285 [2024-09-30 21:49:08.454955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.285 [2024-09-30 21:49:08.454958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.285 [2024-09-30 21:49:08.494890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.285 [2024-09-30 21:49:08.494934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:27.577 spdk_app_start Round 1 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036724 /var/tmp/spdk-nbd.sock 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1036724 ']' 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
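Each round's data path is identical: fill a scratch file with 1 MiB of random data, copy it onto both nbd devices with O_DIRECT, read it back with cmp, then unmap the devices and stop the framework before the next round. Condensed from the Round 0 commands above (the rpc.py invocation is folded into a variable; the scratch file sits under spdk/test/event in this job, and only the /dev/nbd0 half is shown):

RPC="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096                       # 64 MiB Malloc0 with 4 KiB blocks (repeated for Malloc1)
$RPC nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of reference data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                    # the bdev must return exactly what was written
rm nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC spdk_kill_instance SIGTERM                       # stop this round; app_repeat restarts for the next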
00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.577 21:49:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.577 Malloc0 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.577 Malloc1 00:05:27.577 21:49:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.577 21:49:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.836 /dev/nbd0 00:05:27.836 21:49:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.836 21:49:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.836 1+0 records in 00:05:27.836 1+0 records out 00:05:27.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216357 s, 18.9 MB/s 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.836 21:49:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.836 21:49:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.836 21:49:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.837 21:49:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.096 /dev/nbd1 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.096 1+0 records in 00:05:28.096 1+0 records out 00:05:28.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279756 s, 14.6 MB/s 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.096 21:49:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.096 21:49:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:28.355 21:49:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.355 { 00:05:28.355 "nbd_device": "/dev/nbd0", 00:05:28.355 "bdev_name": "Malloc0" 00:05:28.355 }, 00:05:28.355 { 00:05:28.355 "nbd_device": "/dev/nbd1", 00:05:28.355 "bdev_name": "Malloc1" 00:05:28.355 } 00:05:28.355 ]' 00:05:28.355 21:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.355 { 00:05:28.355 "nbd_device": "/dev/nbd0", 00:05:28.355 "bdev_name": "Malloc0" 00:05:28.355 }, 00:05:28.355 { 00:05:28.355 "nbd_device": "/dev/nbd1", 00:05:28.355 "bdev_name": "Malloc1" 00:05:28.355 } 00:05:28.355 ]' 00:05:28.355 21:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.355 21:49:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.355 /dev/nbd1' 00:05:28.355 21:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.355 /dev/nbd1' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.356 256+0 records in 00:05:28.356 256+0 records out 00:05:28.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011534 s, 90.9 MB/s 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.356 256+0 records in 00:05:28.356 256+0 records out 00:05:28.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198536 s, 52.8 MB/s 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.356 256+0 records in 00:05:28.356 256+0 records out 00:05:28.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212779 s, 49.3 MB/s 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.356 21:49:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.615 21:49:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.874 21:49:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.133 21:49:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.133 21:49:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.133 21:49:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.392 [2024-09-30 21:49:13.656025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.393 [2024-09-30 21:49:13.728077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.393 [2024-09-30 21:49:13.728080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.652 [2024-09-30 21:49:13.769009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.652 [2024-09-30 21:49:13.769047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.187 21:49:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.187 21:49:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.187 spdk_app_start Round 2 00:05:32.187 21:49:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1036724 /var/tmp/spdk-nbd.sock 00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1036724 ']' 00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
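In between, the harness sanity-checks the nbd bookkeeping with nbd_get_disks: the RPC returns a JSON array of nbd_device/bdev_name pairs, jq pulls out the device paths, and grep -c counts them, expecting 2 while the round is active and 0 once both devices are stopped. As in the trace (rpc.py path folded into a variable):

RPC="/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
nbd_disks_json=$($RPC nbd_get_disks)                  # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # '|| true' because grep exits 1 on zero matches
[ "$count" -eq 2 ]                                    # after nbd_stop_disk the same check expects 0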
00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.187 21:49:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.446 21:49:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.446 21:49:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.446 21:49:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.706 Malloc0 00:05:32.706 21:49:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.706 Malloc1 00:05:32.706 21:49:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.706 21:49:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.965 /dev/nbd0 00:05:32.965 21:49:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.965 21:49:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.965 1+0 records in 00:05:32.965 1+0 records out 00:05:32.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303565 s, 13.5 MB/s 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.965 21:49:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.965 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.965 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.965 21:49:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.224 /dev/nbd1 00:05:33.224 21:49:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.225 1+0 records in 00:05:33.225 1+0 records out 00:05:33.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161322 s, 25.4 MB/s 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.225 21:49:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.225 21:49:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.484 { 00:05:33.484 "nbd_device": "/dev/nbd0", 00:05:33.484 "bdev_name": "Malloc0" 00:05:33.484 }, 00:05:33.484 { 00:05:33.484 "nbd_device": "/dev/nbd1", 00:05:33.484 "bdev_name": "Malloc1" 00:05:33.484 } 00:05:33.484 ]' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.484 { 00:05:33.484 "nbd_device": "/dev/nbd0", 00:05:33.484 "bdev_name": "Malloc0" 00:05:33.484 }, 00:05:33.484 { 00:05:33.484 "nbd_device": "/dev/nbd1", 00:05:33.484 "bdev_name": "Malloc1" 00:05:33.484 } 00:05:33.484 ]' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.484 /dev/nbd1' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.484 /dev/nbd1' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.484 256+0 records in 00:05:33.484 256+0 records out 00:05:33.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110754 s, 94.7 MB/s 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.484 256+0 records in 00:05:33.484 256+0 records out 00:05:33.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203259 s, 51.6 MB/s 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.484 256+0 records in 00:05:33.484 256+0 records out 00:05:33.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211122 s, 49.7 MB/s 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.484 21:49:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.744 21:49:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.744 21:49:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.003 21:49:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.262 21:49:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.262 21:49:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.522 21:49:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.788 [2024-09-30 21:49:18.921504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.788 [2024-09-30 21:49:18.987369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.788 [2024-09-30 21:49:18.987372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.788 [2024-09-30 21:49:19.027654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.788 [2024-09-30 21:49:19.027697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.467 21:49:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1036724 /var/tmp/spdk-nbd.sock 00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1036724 ']' 00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.467 21:49:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.726 21:49:21 event.app_repeat -- event/event.sh@39 -- # killprocess 1036724 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1036724 ']' 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1036724 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036724 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036724' 00:05:37.726 killing process with pid 1036724 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1036724 00:05:37.726 21:49:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1036724 00:05:37.986 spdk_app_start is called in Round 0. 00:05:37.986 Shutdown signal received, stop current app iteration 00:05:37.986 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:37.986 spdk_app_start is called in Round 1. 00:05:37.986 Shutdown signal received, stop current app iteration 00:05:37.986 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:37.986 spdk_app_start is called in Round 2. 00:05:37.986 Shutdown signal received, stop current app iteration 00:05:37.986 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:37.986 spdk_app_start is called in Round 3. 
00:05:37.986 Shutdown signal received, stop current app iteration 00:05:37.987 21:49:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:37.987 21:49:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:37.987 00:05:37.987 real 0m16.314s 00:05:37.987 user 0m34.866s 00:05:37.987 sys 0m3.143s 00:05:37.987 21:49:22 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.987 21:49:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.987 ************************************ 00:05:37.987 END TEST app_repeat 00:05:37.987 ************************************ 00:05:37.987 21:49:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:37.987 21:49:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.987 21:49:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.987 21:49:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.987 21:49:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.987 ************************************ 00:05:37.987 START TEST cpu_locks 00:05:37.987 ************************************ 00:05:37.987 21:49:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:37.987 * Looking for test storage... 00:05:37.987 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:37.987 21:49:22 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.987 21:49:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.987 21:49:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.247 21:49:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.247 --rc genhtml_branch_coverage=1 00:05:38.247 --rc genhtml_function_coverage=1 00:05:38.247 --rc genhtml_legend=1 00:05:38.247 --rc geninfo_all_blocks=1 00:05:38.247 --rc geninfo_unexecuted_blocks=1 00:05:38.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.247 ' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.247 --rc genhtml_branch_coverage=1 00:05:38.247 --rc genhtml_function_coverage=1 00:05:38.247 --rc genhtml_legend=1 00:05:38.247 --rc geninfo_all_blocks=1 00:05:38.247 --rc geninfo_unexecuted_blocks=1 00:05:38.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.247 ' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.247 --rc genhtml_branch_coverage=1 00:05:38.247 --rc genhtml_function_coverage=1 00:05:38.247 --rc genhtml_legend=1 00:05:38.247 --rc geninfo_all_blocks=1 00:05:38.247 --rc geninfo_unexecuted_blocks=1 00:05:38.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.247 ' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:38.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.247 --rc genhtml_branch_coverage=1 00:05:38.247 --rc genhtml_function_coverage=1 00:05:38.247 --rc genhtml_legend=1 00:05:38.247 --rc geninfo_all_blocks=1 00:05:38.247 --rc geninfo_unexecuted_blocks=1 00:05:38.247 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:38.247 ' 00:05:38.247 21:49:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.247 21:49:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.247 21:49:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.247 21:49:22 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.247 21:49:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.247 ************************************ 00:05:38.247 START TEST default_locks 00:05:38.247 ************************************ 00:05:38.247 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1039767 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1039767 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1039767 ']' 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.248 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.248 [2024-09-30 21:49:22.500440] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:38.248 [2024-09-30 21:49:22.500504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039767 ] 00:05:38.248 [2024-09-30 21:49:22.568636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.507 [2024-09-30 21:49:22.646215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.507 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.507 21:49:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:38.507 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1039767 00:05:38.507 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1039767 00:05:38.507 21:49:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.444 lslocks: write error 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1039767 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1039767 ']' 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1039767 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1039767 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1039767' 00:05:39.445 killing process with pid 1039767 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1039767 00:05:39.445 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1039767 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1039767 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1039767 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1039767 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1039767 ']' 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.706 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1039767) - No such process 00:05:39.706 ERROR: process (pid: 1039767) is no longer running 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.706 00:05:39.706 real 0m1.374s 00:05:39.706 user 0m1.367s 00:05:39.706 sys 0m0.621s 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.706 21:49:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.706 ************************************ 00:05:39.706 END TEST default_locks 00:05:39.706 ************************************ 00:05:39.706 21:49:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:39.706 21:49:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.706 21:49:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.706 21:49:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.706 ************************************ 00:05:39.706 START TEST default_locks_via_rpc 00:05:39.706 ************************************ 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1040057 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1040057 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1040057 ']' 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.706 21:49:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.706 [2024-09-30 21:49:23.950345] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:39.706 [2024-09-30 21:49:23.950412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040057 ] 00:05:39.707 [2024-09-30 21:49:24.018827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.965 [2024-09-30 21:49:24.098322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1040057 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1040057 00:05:39.965 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1040057 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1040057 ']' 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1040057 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040057 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040057' 00:05:40.531 killing process with pid 1040057 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1040057 00:05:40.531 21:49:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1040057 00:05:40.790 00:05:40.790 real 0m1.104s 00:05:40.790 user 0m1.070s 00:05:40.790 sys 0m0.516s 00:05:40.790 21:49:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.790 21:49:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.790 ************************************ 00:05:40.790 END TEST default_locks_via_rpc 00:05:40.790 ************************************ 00:05:40.790 21:49:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.790 21:49:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.790 21:49:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.790 21:49:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.790 ************************************ 00:05:40.790 START TEST non_locking_app_on_locked_coremask 00:05:40.790 ************************************ 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1040353 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1040353 /var/tmp/spdk.sock 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1040353 ']' 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.790 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.790 [2024-09-30 21:49:25.143733] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:40.790 [2024-09-30 21:49:25.143796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040353 ] 00:05:41.049 [2024-09-30 21:49:25.210189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.049 [2024-09-30 21:49:25.278415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1040361 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1040361 /var/tmp/spdk2.sock 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1040361 ']' 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.308 21:49:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.308 [2024-09-30 21:49:25.505032] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:41.308 [2024-09-30 21:49:25.505102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040361 ] 00:05:41.308 [2024-09-30 21:49:25.595223] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.308 [2024-09-30 21:49:25.595255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.567 [2024-09-30 21:49:25.743841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.134 21:49:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.134 21:49:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:42.134 21:49:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1040353 00:05:42.134 21:49:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1040353 00:05:42.134 21:49:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.068 lslocks: write error 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1040353 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1040353 ']' 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1040353 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.068 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040353 00:05:43.069 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.069 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.069 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040353' 00:05:43.069 killing process with pid 1040353 00:05:43.069 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1040353 00:05:43.069 21:49:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1040353 00:05:43.637 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1040361 00:05:43.637 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1040361 ']' 00:05:43.637 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1040361 00:05:43.637 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040361 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040361' 00:05:43.896 
killing process with pid 1040361 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1040361 00:05:43.896 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1040361 00:05:44.154 00:05:44.154 real 0m3.265s 00:05:44.154 user 0m3.438s 00:05:44.154 sys 0m1.173s 00:05:44.155 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.155 21:49:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.155 ************************************ 00:05:44.155 END TEST non_locking_app_on_locked_coremask 00:05:44.155 ************************************ 00:05:44.155 21:49:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:44.155 21:49:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.155 21:49:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.155 21:49:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.155 ************************************ 00:05:44.155 START TEST locking_app_on_unlocked_coremask 00:05:44.155 ************************************ 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1040928 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1040928 /var/tmp/spdk.sock 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1040928 ']' 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.155 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.155 [2024-09-30 21:49:28.496099] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:44.155 [2024-09-30 21:49:28.496170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040928 ] 00:05:44.413 [2024-09-30 21:49:28.561695] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.413 [2024-09-30 21:49:28.561720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.413 [2024-09-30 21:49:28.629863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.672 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.672 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1041021 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1041021 /var/tmp/spdk2.sock 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1041021 ']' 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.673 21:49:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.673 [2024-09-30 21:49:28.874943] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:44.673 [2024-09-30 21:49:28.875008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041021 ] 00:05:44.673 [2024-09-30 21:49:28.970116] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.931 [2024-09-30 21:49:29.113671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.498 21:49:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.498 21:49:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.498 21:49:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1041021 00:05:45.498 21:49:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.498 21:49:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1041021 00:05:46.876 lslocks: write error 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1040928 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1040928 ']' 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1040928 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.876 21:49:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040928 00:05:46.876 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.876 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.876 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040928' 00:05:46.876 killing process with pid 1040928 00:05:46.876 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1040928 00:05:46.876 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1040928 00:05:47.443 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1041021 00:05:47.443 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1041021 ']' 00:05:47.443 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1041021 00:05:47.443 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.443 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.444 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041021 00:05:47.444 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.444 21:49:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.444 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041021' 00:05:47.444 killing process with pid 1041021 00:05:47.444 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1041021 00:05:47.444 21:49:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1041021 00:05:47.703 00:05:47.703 real 0m3.591s 00:05:47.703 user 0m3.715s 00:05:47.703 sys 0m1.366s 00:05:47.703 21:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.703 21:49:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.703 ************************************ 00:05:47.703 END TEST locking_app_on_unlocked_coremask 00:05:47.703 ************************************ 00:05:47.963 21:49:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:47.963 21:49:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.963 21:49:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.963 21:49:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.963 ************************************ 00:05:47.963 START TEST locking_app_on_locked_coremask 00:05:47.963 ************************************ 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1041686 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1041686 /var/tmp/spdk.sock 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1041686 ']' 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.963 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.963 [2024-09-30 21:49:32.162402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:47.963 [2024-09-30 21:49:32.162459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041686 ] 00:05:47.963 [2024-09-30 21:49:32.229945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.963 [2024-09-30 21:49:32.306352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1041755 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1041755 /var/tmp/spdk2.sock 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1041755 /var/tmp/spdk2.sock 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1041755 /var/tmp/spdk2.sock 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1041755 ']' 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.223 21:49:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.223 [2024-09-30 21:49:32.524603] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:48.223 [2024-09-30 21:49:32.524671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041755 ] 00:05:48.482 [2024-09-30 21:49:32.608533] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1041686 has claimed it. 00:05:48.482 [2024-09-30 21:49:32.608562] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.051 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1041755) - No such process 00:05:49.051 ERROR: process (pid: 1041755) is no longer running 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1041686 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1041686 00:05:49.051 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.620 lslocks: write error 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1041686 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1041686 ']' 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1041686 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.620 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1041686 00:05:49.621 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.621 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.621 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1041686' 00:05:49.621 killing process with pid 1041686 00:05:49.621 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1041686 00:05:49.621 21:49:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1041686 00:05:49.880 00:05:49.880 real 0m2.110s 00:05:49.880 user 0m2.215s 00:05:49.880 sys 0m0.754s 00:05:49.880 21:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:49.880 21:49:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.880 ************************************ 00:05:49.880 END TEST locking_app_on_locked_coremask 00:05:49.880 ************************************ 00:05:50.139 21:49:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:50.139 21:49:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.139 21:49:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.139 21:49:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.139 ************************************ 00:05:50.139 START TEST locking_overlapped_coremask 00:05:50.139 ************************************ 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1042063 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1042063 /var/tmp/spdk.sock 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1042063 ']' 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.140 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.140 [2024-09-30 21:49:34.356325] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:50.140 [2024-09-30 21:49:34.356387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042063 ] 00:05:50.140 [2024-09-30 21:49:34.423184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.140 [2024-09-30 21:49:34.492426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.140 [2024-09-30 21:49:34.492521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.140 [2024-09-30 21:49:34.492522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1042068 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1042068 /var/tmp/spdk2.sock 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1042068 /var/tmp/spdk2.sock 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.399 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1042068 /var/tmp/spdk2.sock 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1042068 ']' 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.400 21:49:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.400 [2024-09-30 21:49:34.727403] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:50.400 [2024-09-30 21:49:34.727464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042068 ] 00:05:50.659 [2024-09-30 21:49:34.820435] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1042063 has claimed it. 00:05:50.659 [2024-09-30 21:49:34.820478] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.229 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1042068) - No such process 00:05:51.229 ERROR: process (pid: 1042068) is no longer running 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1042063 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1042063 ']' 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1042063 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1042063 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1042063' 00:05:51.229 killing process with pid 1042063 00:05:51.229 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1042063 00:05:51.229 21:49:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1042063 00:05:51.488 00:05:51.488 real 0m1.450s 00:05:51.488 user 0m3.920s 00:05:51.488 sys 0m0.444s 00:05:51.489 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.489 21:49:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.489 ************************************ 00:05:51.489 END TEST locking_overlapped_coremask 00:05:51.489 ************************************ 00:05:51.489 21:49:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:51.489 21:49:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.489 21:49:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.489 21:49:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 ************************************ 00:05:51.748 START TEST locking_overlapped_coremask_via_rpc 00:05:51.748 ************************************ 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1042358 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1042358 /var/tmp/spdk.sock 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1042358 ']' 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.748 21:49:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 [2024-09-30 21:49:35.892820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:51.748 [2024-09-30 21:49:35.892882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042358 ] 00:05:51.748 [2024-09-30 21:49:35.960367] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.748 [2024-09-30 21:49:35.960394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.748 [2024-09-30 21:49:36.029670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.748 [2024-09-30 21:49:36.029765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.748 [2024-09-30 21:49:36.029768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1042367 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1042367 /var/tmp/spdk2.sock 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1042367 ']' 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.007 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.008 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.008 21:49:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.008 [2024-09-30 21:49:36.266203] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:52.008 [2024-09-30 21:49:36.266263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042367 ] 00:05:52.008 [2024-09-30 21:49:36.360760] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.008 [2024-09-30 21:49:36.360794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.267 [2024-09-30 21:49:36.512603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.267 [2024-09-30 21:49:36.512726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.267 [2024-09-30 21:49:36.512726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.836 [2024-09-30 21:49:37.138382] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1042358 has claimed it. 
00:05:52.836 request: 00:05:52.836 { 00:05:52.836 "method": "framework_enable_cpumask_locks", 00:05:52.836 "req_id": 1 00:05:52.836 } 00:05:52.836 Got JSON-RPC error response 00:05:52.836 response: 00:05:52.836 { 00:05:52.836 "code": -32603, 00:05:52.836 "message": "Failed to claim CPU core: 2" 00:05:52.836 } 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1042358 /var/tmp/spdk.sock 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1042358 ']' 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.836 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1042367 /var/tmp/spdk2.sock 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1042367 ']' 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.096 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.355 00:05:53.355 real 0m1.691s 00:05:53.355 user 0m0.784s 00:05:53.355 sys 0m0.162s 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.355 21:49:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.355 ************************************ 00:05:53.355 END TEST locking_overlapped_coremask_via_rpc 00:05:53.355 ************************************ 00:05:53.355 21:49:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:53.355 21:49:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1042358 ]] 00:05:53.355 21:49:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1042358 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1042358 ']' 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1042358 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1042358 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1042358' 00:05:53.355 killing process with pid 1042358 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1042358 00:05:53.355 21:49:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1042358 00:05:53.924 21:49:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1042367 ]] 00:05:53.924 21:49:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1042367 00:05:53.924 21:49:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1042367 ']' 00:05:53.924 21:49:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1042367 00:05:53.924 21:49:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.924 21:49:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1042367 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1042367' 00:05:53.924 killing process with pid 1042367 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1042367 00:05:53.924 21:49:38 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1042367 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1042358 ]] 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1042358 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1042358 ']' 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1042358 00:05:54.184 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1042358) - No such process 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1042358 is not found' 00:05:54.184 Process with pid 1042358 is not found 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1042367 ]] 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1042367 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1042367 ']' 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1042367 00:05:54.184 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1042367) - No such process 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1042367 is not found' 00:05:54.184 Process with pid 1042367 is not found 00:05:54.184 21:49:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.184 00:05:54.184 real 0m16.154s 00:05:54.184 user 0m26.341s 00:05:54.184 sys 0m6.151s 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.184 21:49:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 END TEST cpu_locks 00:05:54.184 ************************************ 00:05:54.184 00:05:54.184 real 0m41.386s 00:05:54.184 user 1m15.681s 00:05:54.184 sys 0m10.502s 00:05:54.184 21:49:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.184 21:49:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 END TEST event 00:05:54.184 ************************************ 00:05:54.184 21:49:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:54.184 21:49:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.184 21:49:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.184 21:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:54.184 ************************************ 00:05:54.184 START TEST thread 00:05:54.184 ************************************ 00:05:54.184 21:49:38 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:05:54.445 * Looking for test storage... 00:05:54.445 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.445 21:49:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.445 21:49:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.445 21:49:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.445 21:49:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.445 21:49:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.445 21:49:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.445 21:49:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.445 21:49:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.445 21:49:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.445 21:49:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.445 21:49:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.445 21:49:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.445 21:49:38 thread -- scripts/common.sh@345 -- # : 1 00:05:54.445 21:49:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.445 21:49:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.445 21:49:38 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.445 21:49:38 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.445 21:49:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.445 21:49:38 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.445 21:49:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.445 21:49:38 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.445 21:49:38 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.445 21:49:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.445 21:49:38 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.445 21:49:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.445 21:49:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.445 21:49:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.445 21:49:38 thread -- scripts/common.sh@368 -- # return 0 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.445 --rc genhtml_branch_coverage=1 00:05:54.445 --rc genhtml_function_coverage=1 00:05:54.445 --rc genhtml_legend=1 00:05:54.445 --rc geninfo_all_blocks=1 00:05:54.445 --rc geninfo_unexecuted_blocks=1 00:05:54.445 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.445 ' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.445 --rc genhtml_branch_coverage=1 00:05:54.445 --rc genhtml_function_coverage=1 00:05:54.445 --rc genhtml_legend=1 
00:05:54.445 --rc geninfo_all_blocks=1 00:05:54.445 --rc geninfo_unexecuted_blocks=1 00:05:54.445 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.445 ' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.445 --rc genhtml_branch_coverage=1 00:05:54.445 --rc genhtml_function_coverage=1 00:05:54.445 --rc genhtml_legend=1 00:05:54.445 --rc geninfo_all_blocks=1 00:05:54.445 --rc geninfo_unexecuted_blocks=1 00:05:54.445 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.445 ' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.445 --rc genhtml_branch_coverage=1 00:05:54.445 --rc genhtml_function_coverage=1 00:05:54.445 --rc genhtml_legend=1 00:05:54.445 --rc geninfo_all_blocks=1 00:05:54.445 --rc geninfo_unexecuted_blocks=1 00:05:54.445 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:54.445 ' 00:05:54.445 21:49:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.445 21:49:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.445 ************************************ 00:05:54.445 START TEST thread_poller_perf 00:05:54.445 ************************************ 00:05:54.445 21:49:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.445 [2024-09-30 21:49:38.771047] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:54.445 [2024-09-30 21:49:38.771129] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043001 ] 00:05:54.705 [2024-09-30 21:49:38.841666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.705 [2024-09-30 21:49:38.914287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.705 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:55.643 ====================================== 00:05:55.643 busy:2503765860 (cyc) 00:05:55.643 total_run_count: 824000 00:05:55.643 tsc_hz: 2500000000 (cyc) 00:05:55.643 ====================================== 00:05:55.643 poller_cost: 3038 (cyc), 1215 (nsec) 00:05:55.643 00:05:55.643 real 0m1.229s 00:05:55.643 user 0m1.142s 00:05:55.643 sys 0m0.083s 00:05:55.643 21:49:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.643 21:49:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.643 ************************************ 00:05:55.643 END TEST thread_poller_perf 00:05:55.643 ************************************ 00:05:55.903 21:49:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.903 21:49:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:55.903 21:49:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.903 21:49:40 thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.903 ************************************ 00:05:55.903 START TEST thread_poller_perf 00:05:55.903 ************************************ 00:05:55.903 21:49:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:55.903 [2024-09-30 21:49:40.085386] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:55.903 [2024-09-30 21:49:40.085474] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043283 ] 00:05:55.903 [2024-09-30 21:49:40.156490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.903 [2024-09-30 21:49:40.231529] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.903 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:57.284 ====================================== 00:05:57.284 busy:2501321148 (cyc) 00:05:57.284 total_run_count: 12772000 00:05:57.284 tsc_hz: 2500000000 (cyc) 00:05:57.284 ====================================== 00:05:57.284 poller_cost: 195 (cyc), 78 (nsec) 00:05:57.284 00:05:57.284 real 0m1.230s 00:05:57.284 user 0m1.143s 00:05:57.284 sys 0m0.083s 00:05:57.284 21:49:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.284 21:49:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.284 ************************************ 00:05:57.284 END TEST thread_poller_perf 00:05:57.285 ************************************ 00:05:57.285 21:49:41 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:05:57.285 21:49:41 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:57.285 21:49:41 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.285 21:49:41 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.285 21:49:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.285 ************************************ 00:05:57.285 START TEST thread_spdk_lock 00:05:57.285 ************************************ 00:05:57.285 21:49:41 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:05:57.285 [2024-09-30 21:49:41.405249] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:57.285 [2024-09-30 21:49:41.405342] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043537 ] 00:05:57.285 [2024-09-30 21:49:41.476693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.285 [2024-09-30 21:49:41.551221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.285 [2024-09-30 21:49:41.551224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.855 [2024-09-30 21:49:42.037827] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 967:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:57.855 [2024-09-30 21:49:42.037878] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3080:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:05:57.855 [2024-09-30 21:49:42.037888] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3035:sspin_stacks_print: *ERROR*: spinlock 0x14c1940 00:05:57.855 [2024-09-30 21:49:42.038791] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 862:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:57.855 [2024-09-30 21:49:42.038897] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1028:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:57.855 [2024-09-30 21:49:42.038916] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 862:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) 
held while SPDK thread going off CPU (thread->lock_count == 0) 00:05:57.855 Starting test contend 00:05:57.855 Worker Delay Wait us Hold us Total us 00:05:57.855 0 3 163798 183871 347669 00:05:57.855 1 5 80629 283504 364133 00:05:57.855 PASS test contend 00:05:57.855 Starting test hold_by_poller 00:05:57.855 PASS test hold_by_poller 00:05:57.855 Starting test hold_by_message 00:05:57.855 PASS test hold_by_message 00:05:57.855 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:05:57.855 100014 assertions passed 00:05:57.855 0 assertions failed 00:05:57.855 00:05:57.855 real 0m0.716s 00:05:57.855 user 0m1.102s 00:05:57.855 sys 0m0.098s 00:05:57.855 21:49:42 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.855 21:49:42 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:05:57.855 ************************************ 00:05:57.855 END TEST thread_spdk_lock 00:05:57.855 ************************************ 00:05:57.855 00:05:57.855 real 0m3.622s 00:05:57.855 user 0m3.587s 00:05:57.855 sys 0m0.544s 00:05:57.855 21:49:42 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.855 21:49:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.855 ************************************ 00:05:57.855 END TEST thread 00:05:57.855 ************************************ 00:05:57.855 21:49:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:57.855 21:49:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:57.855 21:49:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.855 21:49:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.855 21:49:42 -- common/autotest_common.sh@10 -- # set +x 00:05:57.855 ************************************ 00:05:57.855 START TEST app_cmdline 00:05:57.855 ************************************ 00:05:57.855 21:49:42 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:05:58.115 * Looking for test storage... 
00:05:58.115 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.115 21:49:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.115 --rc genhtml_branch_coverage=1 00:05:58.115 --rc genhtml_function_coverage=1 00:05:58.115 --rc genhtml_legend=1 00:05:58.115 --rc geninfo_all_blocks=1 00:05:58.115 --rc geninfo_unexecuted_blocks=1 00:05:58.115 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.115 ' 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.115 --rc genhtml_branch_coverage=1 00:05:58.115 --rc genhtml_function_coverage=1 00:05:58.115 --rc 
genhtml_legend=1 00:05:58.115 --rc geninfo_all_blocks=1 00:05:58.115 --rc geninfo_unexecuted_blocks=1 00:05:58.115 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.115 ' 00:05:58.115 21:49:42 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.115 --rc genhtml_branch_coverage=1 00:05:58.115 --rc genhtml_function_coverage=1 00:05:58.115 --rc genhtml_legend=1 00:05:58.115 --rc geninfo_all_blocks=1 00:05:58.115 --rc geninfo_unexecuted_blocks=1 00:05:58.115 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.115 ' 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.116 --rc genhtml_branch_coverage=1 00:05:58.116 --rc genhtml_function_coverage=1 00:05:58.116 --rc genhtml_legend=1 00:05:58.116 --rc geninfo_all_blocks=1 00:05:58.116 --rc geninfo_unexecuted_blocks=1 00:05:58.116 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.116 ' 00:05:58.116 21:49:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:58.116 21:49:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1043650 00:05:58.116 21:49:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1043650 00:05:58.116 21:49:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1043650 ']' 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.116 21:49:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.116 [2024-09-30 21:49:42.424472] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:58.116 [2024-09-30 21:49:42.424556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043650 ] 00:05:58.375 [2024-09-30 21:49:42.492375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.375 [2024-09-30 21:49:42.567678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.635 21:49:42 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.635 21:49:42 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:58.635 { 00:05:58.635 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:05:58.635 "fields": { 00:05:58.635 "major": 25, 00:05:58.635 "minor": 1, 00:05:58.635 "patch": 0, 00:05:58.635 "suffix": "-pre", 00:05:58.635 "commit": "09cc66129" 00:05:58.635 } 00:05:58.635 } 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:58.635 21:49:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:58.635 21:49:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.635 21:49:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:58.635 21:49:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.894 21:49:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:58.894 21:49:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:58.894 21:49:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:05:58.894 21:49:43 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.894 request: 00:05:58.894 { 00:05:58.894 "method": "env_dpdk_get_mem_stats", 00:05:58.894 "req_id": 1 00:05:58.894 } 00:05:58.894 Got JSON-RPC error response 00:05:58.894 response: 00:05:58.894 { 00:05:58.894 "code": -32601, 00:05:58.894 "message": "Method not found" 00:05:58.894 } 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.894 21:49:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1043650 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1043650 ']' 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1043650 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.894 21:49:43 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043650 00:05:59.154 21:49:43 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.154 21:49:43 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.154 21:49:43 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043650' 00:05:59.154 killing process with pid 1043650 00:05:59.154 21:49:43 app_cmdline -- common/autotest_common.sh@969 -- # kill 1043650 00:05:59.154 21:49:43 app_cmdline -- common/autotest_common.sh@974 -- # wait 1043650 00:05:59.413 00:05:59.413 real 0m1.381s 00:05:59.413 user 0m1.559s 00:05:59.413 sys 0m0.499s 00:05:59.414 21:49:43 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.414 21:49:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.414 ************************************ 00:05:59.414 END TEST app_cmdline 00:05:59.414 ************************************ 00:05:59.414 21:49:43 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:59.414 21:49:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.414 21:49:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.414 21:49:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.414 ************************************ 00:05:59.414 START TEST version 00:05:59.414 ************************************ 00:05:59.414 21:49:43 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:05:59.414 * Looking for test storage... 
00:05:59.673 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:05:59.673 21:49:43 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.673 21:49:43 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.673 21:49:43 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.673 21:49:43 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.673 21:49:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.673 21:49:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.673 21:49:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.673 21:49:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.673 21:49:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.673 21:49:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.673 21:49:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.673 21:49:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.673 21:49:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.673 21:49:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.673 21:49:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.674 21:49:43 version -- scripts/common.sh@344 -- # case "$op" in 00:05:59.674 21:49:43 version -- scripts/common.sh@345 -- # : 1 00:05:59.674 21:49:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.674 21:49:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.674 21:49:43 version -- scripts/common.sh@365 -- # decimal 1 00:05:59.674 21:49:43 version -- scripts/common.sh@353 -- # local d=1 00:05:59.674 21:49:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.674 21:49:43 version -- scripts/common.sh@355 -- # echo 1 00:05:59.674 21:49:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.674 21:49:43 version -- scripts/common.sh@366 -- # decimal 2 00:05:59.674 21:49:43 version -- scripts/common.sh@353 -- # local d=2 00:05:59.674 21:49:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.674 21:49:43 version -- scripts/common.sh@355 -- # echo 2 00:05:59.674 21:49:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.674 21:49:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.674 21:49:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.674 21:49:43 version -- scripts/common.sh@368 -- # return 0 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.674 --rc genhtml_branch_coverage=1 00:05:59.674 --rc genhtml_function_coverage=1 00:05:59.674 --rc genhtml_legend=1 00:05:59.674 --rc geninfo_all_blocks=1 00:05:59.674 --rc geninfo_unexecuted_blocks=1 00:05:59.674 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.674 ' 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.674 --rc genhtml_branch_coverage=1 00:05:59.674 --rc genhtml_function_coverage=1 00:05:59.674 --rc genhtml_legend=1 00:05:59.674 --rc geninfo_all_blocks=1 00:05:59.674 --rc geninfo_unexecuted_blocks=1 00:05:59.674 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.674 ' 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.674 --rc genhtml_branch_coverage=1 00:05:59.674 --rc genhtml_function_coverage=1 00:05:59.674 --rc genhtml_legend=1 00:05:59.674 --rc geninfo_all_blocks=1 00:05:59.674 --rc geninfo_unexecuted_blocks=1 00:05:59.674 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.674 ' 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.674 --rc genhtml_branch_coverage=1 00:05:59.674 --rc genhtml_function_coverage=1 00:05:59.674 --rc genhtml_legend=1 00:05:59.674 --rc geninfo_all_blocks=1 00:05:59.674 --rc geninfo_unexecuted_blocks=1 00:05:59.674 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.674 ' 00:05:59.674 21:49:43 version -- app/version.sh@17 -- # get_header_version major 00:05:59.674 21:49:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # cut -f2 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.674 21:49:43 version -- app/version.sh@17 -- # major=25 00:05:59.674 21:49:43 version -- app/version.sh@18 -- # get_header_version minor 00:05:59.674 21:49:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # cut -f2 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.674 21:49:43 version -- app/version.sh@18 -- # minor=1 00:05:59.674 21:49:43 version -- app/version.sh@19 -- # get_header_version patch 00:05:59.674 21:49:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # cut -f2 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.674 21:49:43 version -- app/version.sh@19 -- # patch=0 00:05:59.674 21:49:43 version -- app/version.sh@20 -- # get_header_version suffix 00:05:59.674 21:49:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # cut -f2 00:05:59.674 21:49:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.674 21:49:43 version -- app/version.sh@20 -- # suffix=-pre 00:05:59.674 21:49:43 version -- app/version.sh@22 -- # version=25.1 00:05:59.674 21:49:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:59.674 21:49:43 version -- app/version.sh@28 -- # version=25.1rc0 00:05:59.674 21:49:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:05:59.674 21:49:43 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:05:59.674 21:49:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:59.674 21:49:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:59.674 00:05:59.674 real 0m0.251s 00:05:59.674 user 0m0.148s 00:05:59.674 sys 0m0.153s 00:05:59.674 21:49:43 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.674 21:49:43 version -- common/autotest_common.sh@10 -- # set +x 00:05:59.674 ************************************ 00:05:59.674 END TEST version 00:05:59.674 ************************************ 00:05:59.674 21:49:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:59.674 21:49:43 -- spdk/autotest.sh@194 -- # uname -s 00:05:59.674 21:49:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:59.674 21:49:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:59.674 21:49:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:59.674 21:49:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:59.674 21:49:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.674 21:49:43 -- common/autotest_common.sh@10 -- # set +x 00:05:59.674 21:49:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:05:59.674 21:49:44 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:05:59.674 21:49:44 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:05:59.674 21:49:44 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:05:59.674 21:49:44 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:59.674 21:49:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.674 21:49:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.674 21:49:44 -- common/autotest_common.sh@10 -- # set +x 00:05:59.933 ************************************ 00:05:59.933 START TEST llvm_fuzz 00:05:59.933 ************************************ 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:05:59.933 * Looking for test storage... 
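The version test that just finished pulls the release numbers straight out of include/spdk/version.h with a grep/cut/tr pipeline and cross-checks them against what the Python bindings report (py_version=25.1rc0 above). A rough bash equivalent of that extraction; the awk '{print $3}' field index is an assumption standing in for the cut step in the trace:

    ver_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$ver_h" | awk '{print $3}' | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$ver_h" | awk '{print $3}' | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$ver_h" | awk '{print $3}' | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | awk '{print $3}' | tr -d '"')
    version=$major.$minor
    if (( patch != 0 )); then version=$version.$patch; fi
    echo "header version: ${version}${suffix}"   # 25.1-pre in this run; version.sh rewrites the -pre suffix as rc0 before comparing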
00:05:59.933 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.933 21:49:44 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.933 --rc genhtml_branch_coverage=1 00:05:59.933 --rc genhtml_function_coverage=1 00:05:59.933 --rc genhtml_legend=1 00:05:59.933 --rc geninfo_all_blocks=1 00:05:59.933 --rc geninfo_unexecuted_blocks=1 00:05:59.933 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.933 ' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.933 --rc genhtml_branch_coverage=1 00:05:59.933 --rc genhtml_function_coverage=1 00:05:59.933 --rc genhtml_legend=1 00:05:59.933 --rc geninfo_all_blocks=1 00:05:59.933 --rc 
geninfo_unexecuted_blocks=1 00:05:59.933 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.933 ' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.933 --rc genhtml_branch_coverage=1 00:05:59.933 --rc genhtml_function_coverage=1 00:05:59.933 --rc genhtml_legend=1 00:05:59.933 --rc geninfo_all_blocks=1 00:05:59.933 --rc geninfo_unexecuted_blocks=1 00:05:59.933 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.933 ' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.933 --rc genhtml_branch_coverage=1 00:05:59.933 --rc genhtml_function_coverage=1 00:05:59.933 --rc genhtml_legend=1 00:05:59.933 --rc geninfo_all_blocks=1 00:05:59.933 --rc geninfo_unexecuted_blocks=1 00:05:59.933 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:59.933 ' 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:05:59.933 21:49:44 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.933 21:49:44 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:05:59.933 ************************************ 00:05:59.933 START TEST nvmf_llvm_fuzz 00:05:59.933 ************************************ 00:05:59.933 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:06:00.194 * Looking for test storage... 
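Both test files above start by probing the installed lcov with the same helper: scripts/common.sh splits the two version strings on '.', '-' and ':' and compares them field by field, which is why "lt 1.15 2" succeeds and the pre-2.0 "--rc lcov_branch_coverage" options end up in LCOV_OPTS. A self-contained sketch of that comparison, assuming purely numeric fields and treating missing fields as zero:

    version_lt() {   # return 0 when $1 is strictly older than $2
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2, so the branch taken in the trace is expected"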
00:06:00.194 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.194 --rc genhtml_branch_coverage=1 00:06:00.194 --rc genhtml_function_coverage=1 00:06:00.194 --rc genhtml_legend=1 00:06:00.194 --rc geninfo_all_blocks=1 00:06:00.194 --rc geninfo_unexecuted_blocks=1 00:06:00.194 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.194 ' 00:06:00.194 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.195 --rc genhtml_branch_coverage=1 00:06:00.195 --rc genhtml_function_coverage=1 00:06:00.195 --rc genhtml_legend=1 00:06:00.195 --rc geninfo_all_blocks=1 00:06:00.195 --rc geninfo_unexecuted_blocks=1 00:06:00.195 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.195 ' 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.195 --rc genhtml_branch_coverage=1 00:06:00.195 --rc genhtml_function_coverage=1 00:06:00.195 --rc genhtml_legend=1 00:06:00.195 --rc geninfo_all_blocks=1 00:06:00.195 --rc geninfo_unexecuted_blocks=1 00:06:00.195 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.195 ' 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.195 --rc genhtml_branch_coverage=1 00:06:00.195 --rc genhtml_function_coverage=1 00:06:00.195 --rc genhtml_legend=1 00:06:00.195 --rc geninfo_all_blocks=1 00:06:00.195 --rc geninfo_unexecuted_blocks=1 00:06:00.195 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.195 ' 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:06:00.195 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:00.196 #define SPDK_CONFIG_H 00:06:00.196 #define SPDK_CONFIG_AIO_FSDEV 1 00:06:00.196 #define SPDK_CONFIG_APPS 1 00:06:00.196 #define SPDK_CONFIG_ARCH native 00:06:00.196 #undef SPDK_CONFIG_ASAN 00:06:00.196 #undef SPDK_CONFIG_AVAHI 00:06:00.196 #undef SPDK_CONFIG_CET 00:06:00.196 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:06:00.196 #define SPDK_CONFIG_COVERAGE 1 00:06:00.196 #define SPDK_CONFIG_CROSS_PREFIX 00:06:00.196 #undef SPDK_CONFIG_CRYPTO 00:06:00.196 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:00.196 #undef SPDK_CONFIG_CUSTOMOCF 00:06:00.196 #undef SPDK_CONFIG_DAOS 00:06:00.196 #define SPDK_CONFIG_DAOS_DIR 00:06:00.196 #define SPDK_CONFIG_DEBUG 1 00:06:00.196 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:00.196 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:06:00.196 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:00.196 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:00.196 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:00.196 #undef SPDK_CONFIG_DPDK_UADK 00:06:00.196 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:06:00.196 #define SPDK_CONFIG_EXAMPLES 1 00:06:00.196 #undef SPDK_CONFIG_FC 00:06:00.196 #define SPDK_CONFIG_FC_PATH 00:06:00.196 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:00.196 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:00.196 #define SPDK_CONFIG_FSDEV 1 00:06:00.196 #undef SPDK_CONFIG_FUSE 00:06:00.196 #define SPDK_CONFIG_FUZZER 1 00:06:00.196 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:06:00.196 #undef SPDK_CONFIG_GOLANG 00:06:00.196 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:00.196 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:00.196 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:00.196 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:00.196 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:00.196 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:00.196 #undef SPDK_CONFIG_HAVE_LZ4 00:06:00.196 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:06:00.196 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:06:00.196 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:00.196 #define SPDK_CONFIG_IDXD 1 00:06:00.196 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:00.196 #undef SPDK_CONFIG_IPSEC_MB 00:06:00.196 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:00.196 #define SPDK_CONFIG_ISAL 1 00:06:00.196 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:00.196 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:00.196 #define SPDK_CONFIG_LIBDIR 00:06:00.196 #undef SPDK_CONFIG_LTO 00:06:00.196 #define SPDK_CONFIG_MAX_LCORES 128 00:06:00.196 #define SPDK_CONFIG_NVME_CUSE 1 00:06:00.196 #undef SPDK_CONFIG_OCF 00:06:00.196 #define SPDK_CONFIG_OCF_PATH 00:06:00.196 #define SPDK_CONFIG_OPENSSL_PATH 00:06:00.196 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:00.196 #define SPDK_CONFIG_PGO_DIR 00:06:00.196 #undef SPDK_CONFIG_PGO_USE 00:06:00.196 #define SPDK_CONFIG_PREFIX /usr/local 00:06:00.196 #undef SPDK_CONFIG_RAID5F 00:06:00.196 #undef SPDK_CONFIG_RBD 00:06:00.196 #define SPDK_CONFIG_RDMA 1 00:06:00.196 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:00.196 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:00.196 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:00.196 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:00.196 #undef SPDK_CONFIG_SHARED 00:06:00.196 #undef SPDK_CONFIG_SMA 00:06:00.196 #define SPDK_CONFIG_TESTS 1 00:06:00.196 #undef SPDK_CONFIG_TSAN 00:06:00.196 #define SPDK_CONFIG_UBLK 1 00:06:00.196 #define SPDK_CONFIG_UBSAN 1 00:06:00.196 #undef SPDK_CONFIG_UNIT_TESTS 00:06:00.196 #undef SPDK_CONFIG_URING 00:06:00.196 #define SPDK_CONFIG_URING_PATH 00:06:00.196 #undef SPDK_CONFIG_URING_ZNS 00:06:00.196 #undef SPDK_CONFIG_USDT 00:06:00.196 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:00.196 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:00.196 #define SPDK_CONFIG_VFIO_USER 1 00:06:00.196 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:00.196 #define SPDK_CONFIG_VHOST 1 00:06:00.196 #define SPDK_CONFIG_VIRTIO 1 00:06:00.196 #undef SPDK_CONFIG_VTUNE 00:06:00.196 #define SPDK_CONFIG_VTUNE_DIR 00:06:00.196 #define SPDK_CONFIG_WERROR 1 00:06:00.196 #define SPDK_CONFIG_WPDK_DIR 00:06:00.196 #undef SPDK_CONFIG_XNVME 00:06:00.196 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:06:00.196 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:06:00.197 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:06:00.458 21:49:44 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:06:00.458 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
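The applications.sh step a little further up (the long "#ifndef SPDK_CONFIG_H ..." block) never sources the generated header: it slurps include/spdk/config.h and glob-matches the whole text against *"#define SPDK_CONFIG_DEBUG"* before consulting SPDK_AUTOTEST_DEBUG_APPS; the backslash-escaped pattern in the trace is just xtrace's quoting of that literal. A minimal, hedged restatement of the check:

    config_h=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"   # only then does SPDK_AUTOTEST_DEBUG_APPS (0 in this run) come into play
    fi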
00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:06:00.459 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1044256 ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1044256 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.BiabSS 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.BiabSS/tests/nvmf /tmp/spdk.BiabSS 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=678330368 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4606099456 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=53295366144 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730631680 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8435265536 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 
21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860550144 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340133888 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346126336 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864465920 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865317888 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=851968 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173048832 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173061120 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:06:00.460 * Looking for test storage... 
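
The trace above shows set_test_storage reading `df -T`, dropping the header with `grep -v Filesystem`, and recording each mount's filesystem, size, available and used bytes into arrays before the "Looking for test storage" step picks a candidate directory with enough free space. As a rough illustration of that selection idea only (a simplified sketch, not the actual autotest_common.sh code; the helper name pick_test_storage is invented here):

    # Simplified sketch of the storage-selection step traced above.
    # pick_test_storage is a hypothetical name; the real logic lives in
    # autotest_common.sh's set_test_storage and differs in detail.
    pick_test_storage() {
        local requested_size=$1; shift
        local candidate target_space
        for candidate in "$@"; do
            [[ -d "$candidate" ]] || continue
            # df -P prints available space for the backing mount in 1K blocks.
            target_space=$(( $(df -P "$candidate" | awk 'NR==2 {print $4}') * 1024 ))
            if (( target_space >= requested_size )); then
                printf '* Found test storage at %s\n' "$candidate" >&2
                echo "$candidate"
                return 0
            fi
        done
        return 1
    }
    # Example: pick_test_storage $((2 * 1024 * 1024 * 1024)) "$testdir" /tmp
    # would return the first candidate whose mount has at least 2 GiB free.
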
00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=53295366144 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10649858048 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.460 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.460 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.461 --rc genhtml_branch_coverage=1 00:06:00.461 --rc genhtml_function_coverage=1 00:06:00.461 --rc genhtml_legend=1 00:06:00.461 --rc geninfo_all_blocks=1 00:06:00.461 --rc geninfo_unexecuted_blocks=1 00:06:00.461 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.461 ' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.461 --rc genhtml_branch_coverage=1 00:06:00.461 --rc genhtml_function_coverage=1 00:06:00.461 --rc genhtml_legend=1 00:06:00.461 --rc geninfo_all_blocks=1 00:06:00.461 --rc geninfo_unexecuted_blocks=1 00:06:00.461 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.461 ' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.461 --rc genhtml_branch_coverage=1 00:06:00.461 --rc genhtml_function_coverage=1 00:06:00.461 --rc genhtml_legend=1 00:06:00.461 --rc geninfo_all_blocks=1 00:06:00.461 --rc geninfo_unexecuted_blocks=1 00:06:00.461 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.461 ' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.461 --rc genhtml_branch_coverage=1 00:06:00.461 --rc genhtml_function_coverage=1 00:06:00.461 --rc genhtml_legend=1 00:06:00.461 --rc geninfo_all_blocks=1 00:06:00.461 --rc geninfo_unexecuted_blocks=1 00:06:00.461 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:00.461 ' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:06:00.461 21:49:44 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:00.461 21:49:44 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:06:00.461 [2024-09-30 21:49:44.780521] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:00.461 [2024-09-30 21:49:44.780596] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044403 ] 00:06:00.721 [2024-09-30 21:49:44.960880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.721 [2024-09-30 21:49:45.026243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.721 [2024-09-30 21:49:45.085021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.978 [2024-09-30 21:49:45.101393] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:06:00.978 INFO: Running with entropic power schedule (0xFF, 100). 00:06:00.978 INFO: Seed: 2602112170 00:06:00.978 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:00.978 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:00.978 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:06:00.978 INFO: A corpus is not provided, starting from an empty corpus 00:06:00.978 #2 INITED exec/s: 0 rss: 66Mb 00:06:00.978 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:00.978 This may also happen if the target rejected all inputs we tried so far 00:06:00.979 [2024-09-30 21:49:45.149804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:00.979 [2024-09-30 21:49:45.149833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.239 NEW_FUNC[1/715]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:06:01.239 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:01.239 #32 NEW cov: 12169 ft: 12128 corp: 2/70b lim: 320 exec/s: 0 rss: 73Mb L: 69/69 MS: 5 InsertRepeatedBytes-CMP-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- DE: "\001\000\000u"- 00:06:01.239 [2024-09-30 21:49:45.480684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.239 [2024-09-30 21:49:45.480719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.239 #38 NEW cov: 12282 ft: 12784 corp: 3/139b lim: 320 exec/s: 0 rss: 73Mb L: 69/69 MS: 1 ChangeBit- 00:06:01.239 [2024-09-30 21:49:45.540739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8affffffffffffff 00:06:01.239 [2024-09-30 21:49:45.540766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.239 #39 NEW cov: 12288 ft: 13054 corp: 4/208b lim: 320 exec/s: 0 rss: 73Mb L: 
69/69 MS: 1 ChangeByte- 00:06:01.239 [2024-09-30 21:49:45.580827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xff5bffffffffffff 00:06:01.239 [2024-09-30 21:49:45.580855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.239 #40 NEW cov: 12373 ft: 13419 corp: 5/277b lim: 320 exec/s: 0 rss: 73Mb L: 69/69 MS: 1 ChangeByte- 00:06:01.501 [2024-09-30 21:49:45.620957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.501 [2024-09-30 21:49:45.620983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.501 #41 NEW cov: 12373 ft: 13527 corp: 6/346b lim: 320 exec/s: 0 rss: 73Mb L: 69/69 MS: 1 ChangeBit- 00:06:01.501 [2024-09-30 21:49:45.661050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.501 [2024-09-30 21:49:45.661077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.501 #42 NEW cov: 12373 ft: 13642 corp: 7/416b lim: 320 exec/s: 0 rss: 73Mb L: 70/70 MS: 1 InsertByte- 00:06:01.501 [2024-09-30 21:49:45.721198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.501 [2024-09-30 21:49:45.721223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.501 #43 NEW cov: 12373 ft: 13706 corp: 8/486b lim: 320 exec/s: 0 rss: 74Mb L: 70/70 MS: 1 ChangeBit- 00:06:01.501 [2024-09-30 21:49:45.781344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xff8affffffffffff 00:06:01.501 [2024-09-30 21:49:45.781370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.501 #44 NEW cov: 12373 ft: 13735 corp: 9/554b lim: 320 exec/s: 0 rss: 74Mb L: 68/70 MS: 1 EraseBytes- 00:06:01.501 [2024-09-30 21:49:45.821478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:f1f1f1f1 cdw11:fffffff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0xf1f1f1f1f1f1f1f1 00:06:01.501 [2024-09-30 21:49:45.821504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.501 #45 NEW cov: 12373 ft: 13768 corp: 10/646b lim: 320 exec/s: 0 rss: 74Mb L: 92/92 MS: 1 InsertRepeatedBytes- 00:06:01.501 [2024-09-30 21:49:45.861572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:30ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.501 [2024-09-30 21:49:45.861598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.759 #46 NEW cov: 12373 ft: 13821 corp: 11/716b lim: 320 exec/s: 0 rss: 74Mb L: 
70/92 MS: 1 ChangeASCIIInt- 00:06:01.759 [2024-09-30 21:49:45.921743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8affffffffffffff 00:06:01.759 [2024-09-30 21:49:45.921768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.759 #47 NEW cov: 12373 ft: 13842 corp: 12/785b lim: 320 exec/s: 0 rss: 74Mb L: 69/92 MS: 1 ChangeByte- 00:06:01.759 [2024-09-30 21:49:45.961876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.759 [2024-09-30 21:49:45.961902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.759 #48 NEW cov: 12373 ft: 13874 corp: 13/855b lim: 320 exec/s: 0 rss: 74Mb L: 70/92 MS: 1 InsertByte- 00:06:01.759 [2024-09-30 21:49:46.022073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (fe) qid:0 cid:4 nsid:fffffffe cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ffffffffff 00:06:01.759 [2024-09-30 21:49:46.022098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.759 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:01.759 #53 NEW cov: 12415 ft: 13929 corp: 14/920b lim: 320 exec/s: 0 rss: 74Mb L: 65/92 MS: 5 ShuffleBytes-ChangeByte-CopyPart-CrossOver-CrossOver- 00:06:01.759 [2024-09-30 21:49:46.062277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.759 [2024-09-30 21:49:46.062302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:01.760 [2024-09-30 21:49:46.062361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffff7500 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x200000000 00:06:01.760 [2024-09-30 21:49:46.062374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:01.760 #59 NEW cov: 12415 ft: 14134 corp: 15/1059b lim: 320 exec/s: 0 rss: 74Mb L: 139/139 MS: 1 CrossOver- 00:06:01.760 [2024-09-30 21:49:46.122348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:30ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:01.760 [2024-09-30 21:49:46.122376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.018 #60 NEW cov: 12415 ft: 14148 corp: 16/1129b lim: 320 exec/s: 60 rss: 74Mb L: 70/139 MS: 1 ChangeBit- 00:06:02.018 [2024-09-30 21:49:46.182538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.018 [2024-09-30 21:49:46.182564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.018 #66 NEW cov: 12415 ft: 
14163 corp: 17/1198b lim: 320 exec/s: 66 rss: 74Mb L: 69/139 MS: 1 ShuffleBytes- 00:06:02.018 [2024-09-30 21:49:46.222584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b4ffffff cdw10:00000000 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.018 [2024-09-30 21:49:46.222609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.018 #67 NEW cov: 12415 ft: 14215 corp: 18/1268b lim: 320 exec/s: 67 rss: 74Mb L: 70/139 MS: 1 InsertByte- 00:06:02.018 [2024-09-30 21:49:46.262705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ffffffffff 00:06:02.018 [2024-09-30 21:49:46.262730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.019 #68 NEW cov: 12415 ft: 14222 corp: 19/1337b lim: 320 exec/s: 68 rss: 74Mb L: 69/139 MS: 1 CopyPart- 00:06:02.019 [2024-09-30 21:49:46.302807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff9bffffff 00:06:02.019 [2024-09-30 21:49:46.302832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.019 #74 NEW cov: 12415 ft: 14247 corp: 20/1407b lim: 320 exec/s: 74 rss: 74Mb L: 70/139 MS: 1 InsertByte- 00:06:02.019 [2024-09-30 21:49:46.342920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.019 [2024-09-30 21:49:46.342945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.019 #75 NEW cov: 12415 ft: 14274 corp: 21/1480b lim: 320 exec/s: 75 rss: 74Mb L: 73/139 MS: 1 PersAutoDict- DE: "\001\000\000u"- 00:06:02.278 [2024-09-30 21:49:46.403108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffff00ffffffffff 00:06:02.278 [2024-09-30 21:49:46.403134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.278 #76 NEW cov: 12415 ft: 14323 corp: 22/1550b lim: 320 exec/s: 76 rss: 74Mb L: 70/139 MS: 1 ChangeBinInt- 00:06:02.278 [2024-09-30 21:49:46.443214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.278 [2024-09-30 21:49:46.443240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.278 #77 NEW cov: 12415 ft: 14345 corp: 23/1624b lim: 320 exec/s: 77 rss: 74Mb L: 74/139 MS: 1 CMP- DE: "\377\377\001\000"- 00:06:02.278 [2024-09-30 21:49:46.483421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.278 [2024-09-30 21:49:46.483446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:02.278 [2024-09-30 21:49:46.483504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:02000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x2900000000000000 00:06:02.278 [2024-09-30 21:49:46.483521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.278 #78 NEW cov: 12415 ft: 14353 corp: 24/1758b lim: 320 exec/s: 78 rss: 74Mb L: 134/139 MS: 1 CopyPart- 00:06:02.278 [2024-09-30 21:49:46.543551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x8affffffffffffff 00:06:02.278 [2024-09-30 21:49:46.543577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.278 #79 NEW cov: 12415 ft: 14368 corp: 25/1869b lim: 320 exec/s: 79 rss: 74Mb L: 111/139 MS: 1 CopyPart- 00:06:02.278 [2024-09-30 21:49:46.603684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffff00ffffffffff 00:06:02.278 [2024-09-30 21:49:46.603710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.278 #80 NEW cov: 12415 ft: 14434 corp: 26/1939b lim: 320 exec/s: 80 rss: 74Mb L: 70/139 MS: 1 InsertByte- 00:06:02.278 [2024-09-30 21:49:46.643845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff2dffffffff 00:06:02.278 [2024-09-30 21:49:46.643870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.537 #81 NEW cov: 12415 ft: 14494 corp: 27/2010b lim: 320 exec/s: 81 rss: 74Mb L: 71/139 MS: 1 InsertByte- 00:06:02.537 [2024-09-30 21:49:46.683901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.537 [2024-09-30 21:49:46.683928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.537 #82 NEW cov: 12415 ft: 14510 corp: 28/2080b lim: 320 exec/s: 82 rss: 74Mb L: 70/139 MS: 1 ChangeBit- 00:06:02.537 [2024-09-30 21:49:46.724107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:37ffffff cdw10:ff750000 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff37ffffffff 00:06:02.537 [2024-09-30 21:49:46.724133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.537 [2024-09-30 21:49:46.724183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ffff0000 00:06:02.537 [2024-09-30 21:49:46.724198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.537 #83 NEW cov: 12417 ft: 14519 corp: 29/2214b lim: 320 exec/s: 83 rss: 75Mb L: 134/139 MS: 1 CrossOver- 00:06:02.537 [2024-09-30 21:49:46.784238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 
nsid:ffffffff cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.537 [2024-09-30 21:49:46.784262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.537 NEW_FUNC[1/1]: 0x192a258 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:06:02.537 #84 NEW cov: 12431 ft: 14862 corp: 30/2287b lim: 320 exec/s: 84 rss: 75Mb L: 73/139 MS: 1 PersAutoDict- DE: "\001\000\000u"- 00:06:02.537 [2024-09-30 21:49:46.824405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.537 [2024-09-30 21:49:46.824430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.537 [2024-09-30 21:49:46.824490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffff7500 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x200000000 00:06:02.537 [2024-09-30 21:49:46.824508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:02.537 #85 NEW cov: 12431 ft: 14870 corp: 31/2426b lim: 320 exec/s: 85 rss: 75Mb L: 139/139 MS: 1 ChangeByte- 00:06:02.537 [2024-09-30 21:49:46.884490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:000000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x1ffffffffff 00:06:02.537 [2024-09-30 21:49:46.884516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.796 #86 NEW cov: 12431 ft: 14875 corp: 32/2495b lim: 320 exec/s: 86 rss: 75Mb L: 69/139 MS: 1 ChangeBit- 00:06:02.796 [2024-09-30 21:49:46.924602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:00ff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffff9bffffff 00:06:02.796 [2024-09-30 21:49:46.924627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.796 #87 NEW cov: 12431 ft: 14881 corp: 33/2565b lim: 320 exec/s: 87 rss: 75Mb L: 70/139 MS: 1 ShuffleBytes- 00:06:02.796 [2024-09-30 21:49:46.984832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:ffffffff cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:02.796 [2024-09-30 21:49:46.984857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.796 #88 NEW cov: 12431 ft: 14911 corp: 34/2638b lim: 320 exec/s: 88 rss: 75Mb L: 73/139 MS: 1 ChangeByte- 00:06:02.796 [2024-09-30 21:49:47.044930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:b4ffffff cdw10:ffff7500 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffb4ffffff 00:06:02.796 [2024-09-30 21:49:47.044955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.796 #89 NEW cov: 12431 ft: 14914 corp: 35/2708b lim: 320 exec/s: 89 rss: 75Mb L: 70/139 MS: 1 CopyPart- 00:06:02.796 [2024-09-30 21:49:47.105069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(ff) qid:0 cid:4 nsid:30ffffff cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:06:02.796 [2024-09-30 21:49:47.105095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:02.796 #90 NEW cov: 12431 ft: 14949 corp: 36/2778b lim: 320 exec/s: 45 rss: 75Mb L: 70/139 MS: 1 ChangeBit- 00:06:02.796 #90 DONE cov: 12431 ft: 14949 corp: 36/2778b lim: 320 exec/s: 45 rss: 75Mb 00:06:02.796 ###### Recommended dictionary. ###### 00:06:02.796 "\001\000\000u" # Uses: 5 00:06:02.796 "\377\377\001\000" # Uses: 0 00:06:02.796 ###### End of recommended dictionary. ###### 00:06:02.797 Done 90 runs in 2 second(s) 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:03.056 21:49:47 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:06:03.056 [2024-09-30 21:49:47.318500] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
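
In the nvmf/run.sh trace just above, each fuzzer instance gets its TCP port from its index via `printf %02d` (4400, 4401, ...), a transport ID string is built from that port, and `sed` rewrites the trsvcid in a per-instance copy of fuzz_json.conf before llvm_nvme_fuzz is launched. A minimal sketch of that per-fuzzer setup, assuming the same 44xx port scheme (the make_fuzz_conf name and paths are illustrative, not taken from the repository):

    # Illustrative sketch of the per-fuzzer config step traced above.
    # make_fuzz_conf is a made-up helper; the template path is a placeholder.
    make_fuzz_conf() {
        local fuzzer_type=$1
        local template=$2                      # e.g. .../nvmf/fuzz_json.conf
        local out=/tmp/fuzz_json_${fuzzer_type}.conf

        # Port 44<index>, matching the printf %02d seen in the trace.
        local port
        port=44$(printf %02d "$fuzzer_type")

        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

        # Point the JSON config at the per-instance listener port.
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" "$template" > "$out"

        echo "$trid"   # caller passes this via -F and $out via -c
    }
    # Example: trid=$(make_fuzz_conf 1 ./fuzz_json.conf) yields trsvcid:4401,
    # matching the second fuzzer run started above.
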
00:06:03.056 [2024-09-30 21:49:47.318575] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044764 ] 00:06:03.315 [2024-09-30 21:49:47.500541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.315 [2024-09-30 21:49:47.566050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.315 [2024-09-30 21:49:47.624891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.315 [2024-09-30 21:49:47.641275] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:06:03.315 INFO: Running with entropic power schedule (0xFF, 100). 00:06:03.315 INFO: Seed: 849146598 00:06:03.316 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:03.316 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:03.316 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:06:03.316 INFO: A corpus is not provided, starting from an empty corpus 00:06:03.316 #2 INITED exec/s: 0 rss: 64Mb 00:06:03.316 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:03.316 This may also happen if the target rejected all inputs we tried so far 00:06:03.574 [2024-09-30 21:49:47.686361] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:03.574 [2024-09-30 21:49:47.686579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.574 [2024-09-30 21:49:47.686611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.833 NEW_FUNC[1/714]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:06:03.833 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:03.833 #8 NEW cov: 12241 ft: 12235 corp: 2/10b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:03.833 [2024-09-30 21:49:48.017252] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:03.833 [2024-09-30 21:49:48.017494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.833 [2024-09-30 21:49:48.017526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.833 NEW_FUNC[1/1]: 0x190eeb8 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:06:03.833 #9 NEW cov: 12365 ft: 12885 corp: 3/19b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:03.833 [2024-09-30 21:49:48.077355] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000da3d 00:06:03.833 [2024-09-30 21:49:48.077573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbbc8102 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.833 [2024-09-30 
21:49:48.077600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.833 #10 NEW cov: 12377 ft: 13030 corp: 4/28b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 CMP- DE: "\373\274\0025\332=f\000"- 00:06:03.833 [2024-09-30 21:49:48.137484] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14340) > buf size (4096) 00:06:03.833 [2024-09-30 21:49:48.137718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.833 [2024-09-30 21:49:48.137743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.833 #11 NEW cov: 12462 ft: 13421 corp: 5/37b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:06:03.833 [2024-09-30 21:49:48.177637] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:03.833 [2024-09-30 21:49:48.177865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:03.833 [2024-09-30 21:49:48.177892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:03.833 #12 NEW cov: 12462 ft: 13596 corp: 6/46b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:04.092 [2024-09-30 21:49:48.217725] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000001 00:06:04.092 [2024-09-30 21:49:48.217946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.092 [2024-09-30 21:49:48.217972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.092 #13 NEW cov: 12462 ft: 13700 corp: 7/55b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ShuffleBytes- 00:06:04.092 [2024-09-30 21:49:48.277932] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14340) > buf size (4096) 00:06:04.092 [2024-09-30 21:49:48.278176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0000fc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.092 [2024-09-30 21:49:48.278212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.092 #14 NEW cov: 12462 ft: 13793 corp: 8/64b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:06:04.092 [2024-09-30 21:49:48.338260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.092 [2024-09-30 21:49:48.338285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.092 #15 NEW cov: 12479 ft: 13873 corp: 9/75b lim: 30 exec/s: 0 rss: 72Mb L: 11/11 MS: 1 InsertRepeatedBytes- 00:06:04.092 [2024-09-30 21:49:48.378170] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:04.092 [2024-09-30 21:49:48.378404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.092 
[2024-09-30 21:49:48.378430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.092 #16 NEW cov: 12479 ft: 13896 corp: 10/85b lim: 30 exec/s: 0 rss: 72Mb L: 10/11 MS: 1 CrossOver- 00:06:04.092 [2024-09-30 21:49:48.438370] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:04.092 [2024-09-30 21:49:48.438595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.092 [2024-09-30 21:49:48.438624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.092 #17 NEW cov: 12479 ft: 14001 corp: 11/94b lim: 30 exec/s: 0 rss: 72Mb L: 9/11 MS: 1 CopyPart- 00:06:04.351 [2024-09-30 21:49:48.478446] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:04.351 [2024-09-30 21:49:48.478691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.478716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 #18 NEW cov: 12479 ft: 14017 corp: 12/103b lim: 30 exec/s: 0 rss: 72Mb L: 9/11 MS: 1 ChangeBinInt- 00:06:04.351 [2024-09-30 21:49:48.518621] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000035da 00:06:04.351 [2024-09-30 21:49:48.518748] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x7 00:06:04.351 [2024-09-30 21:49:48.518968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0afb02bc cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.518994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 [2024-09-30 21:49:48.519052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3d660000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.519066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.351 #19 NEW cov: 12479 ft: 14398 corp: 13/120b lim: 30 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 PersAutoDict- DE: "\373\274\0025\332=f\000"- 00:06:04.351 [2024-09-30 21:49:48.558696] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000035da 00:06:04.351 [2024-09-30 21:49:48.558935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0efb02bc cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.558961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:04.351 #20 NEW cov: 12502 ft: 14462 corp: 14/129b lim: 30 exec/s: 0 rss: 72Mb L: 9/17 MS: 1 PersAutoDict- DE: "\373\274\0025\332=f\000"- 00:06:04.351 [2024-09-30 21:49:48.598826] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14340) > buf size (4096) 00:06:04.351 [2024-09-30 
21:49:48.598949] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:04.351 [2024-09-30 21:49:48.599170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.599196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 [2024-09-30 21:49:48.599253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.599268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.351 #21 NEW cov: 12502 ft: 14529 corp: 15/146b lim: 30 exec/s: 0 rss: 72Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:06:04.351 [2024-09-30 21:49:48.638929] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000da3d 00:06:04.351 [2024-09-30 21:49:48.639147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbbc8102 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.639172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 #22 NEW cov: 12502 ft: 14552 corp: 16/155b lim: 30 exec/s: 22 rss: 73Mb L: 9/17 MS: 1 ChangeByte- 00:06:04.351 [2024-09-30 21:49:48.699111] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:04.351 [2024-09-30 21:49:48.699335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.351 [2024-09-30 21:49:48.699361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.351 #23 NEW cov: 12502 ft: 14572 corp: 17/165b lim: 30 exec/s: 23 rss: 73Mb L: 10/17 MS: 1 InsertByte- 00:06:04.610 [2024-09-30 21:49:48.739216] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:04.610 [2024-09-30 21:49:48.739462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.739487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.610 #24 NEW cov: 12502 ft: 14653 corp: 18/174b lim: 30 exec/s: 24 rss: 73Mb L: 9/17 MS: 1 ChangeBinInt- 00:06:04.610 [2024-09-30 21:49:48.779598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.779625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.610 #30 NEW cov: 12502 ft: 14686 corp: 19/185b lim: 30 exec/s: 30 rss: 73Mb L: 11/17 MS: 1 ChangeByte- 00:06:04.610 [2024-09-30 21:49:48.839498] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (2560) > len (4) 00:06:04.610 [2024-09-30 21:49:48.839743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.839776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.610 #31 NEW cov: 12508 ft: 14802 corp: 20/194b lim: 30 exec/s: 31 rss: 73Mb L: 9/17 MS: 1 CrossOver- 00:06:04.610 [2024-09-30 21:49:48.899672] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000007 00:06:04.610 [2024-09-30 21:49:48.899893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.899919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.610 #32 NEW cov: 12508 ft: 14830 corp: 21/203b lim: 30 exec/s: 32 rss: 73Mb L: 9/17 MS: 1 ShuffleBytes- 00:06:04.610 [2024-09-30 21:49:48.939869] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6565 00:06:04.610 [2024-09-30 21:49:48.939991] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006565 00:06:04.610 [2024-09-30 21:49:48.940102] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006565 00:06:04.610 [2024-09-30 21:49:48.940211] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (365976) > buf size (4096) 00:06:04.610 [2024-09-30 21:49:48.940447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0000fc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.940472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.610 [2024-09-30 21:49:48.940533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.940547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.610 [2024-09-30 21:49:48.940604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.940621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.610 [2024-09-30 21:49:48.940678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.610 [2024-09-30 21:49:48.940692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.868 #33 NEW cov: 12508 ft: 15383 corp: 22/230b lim: 30 exec/s: 33 rss: 73Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:06:04.868 [2024-09-30 21:49:48.999978] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:04.868 [2024-09-30 21:49:49.000195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.000221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:04.868 #35 NEW cov: 12508 ft: 15387 corp: 23/236b lim: 30 exec/s: 35 rss: 73Mb L: 6/27 MS: 2 EraseBytes-InsertByte- 00:06:04.868 [2024-09-30 21:49:49.040151] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6565 00:06:04.868 [2024-09-30 21:49:49.040272] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006565 00:06:04.868 [2024-09-30 21:49:49.040392] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100006565 00:06:04.868 [2024-09-30 21:49:49.040501] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (365976) > buf size (4096) 00:06:04.868 [2024-09-30 21:49:49.040718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e0000fc cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.040744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.868 [2024-09-30 21:49:49.040804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.040819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.868 [2024-09-30 21:49:49.040877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.040891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:04.868 [2024-09-30 21:49:49.040949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:65658165 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.040963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:04.868 #36 NEW cov: 12508 ft: 15455 corp: 24/263b lim: 30 exec/s: 36 rss: 73Mb L: 27/27 MS: 1 ShuffleBytes- 00:06:04.868 [2024-09-30 21:49:49.100220] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10284) > buf size (4096) 00:06:04.868 [2024-09-30 21:49:49.100469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.100495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.868 #37 NEW cov: 12508 ft: 15504 corp: 25/272b lim: 30 exec/s: 37 rss: 73Mb L: 9/27 MS: 1 ChangeBinInt- 00:06:04.868 [2024-09-30 21:49:49.160440] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:04.868 [2024-09-30 21:49:49.160561] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (579436) > buf size (4096) 00:06:04.868 [2024-09-30 21:49:49.160779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbbc0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.160805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:04.868 [2024-09-30 21:49:49.160863] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:35da023d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.160878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:04.868 #38 NEW cov: 12508 ft: 15510 corp: 26/284b lim: 30 exec/s: 38 rss: 73Mb L: 12/27 MS: 1 CrossOver- 00:06:04.868 [2024-09-30 21:49:49.220559] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (272388) > buf size (4096) 00:06:04.868 [2024-09-30 21:49:49.220779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:04.868 [2024-09-30 21:49:49.220805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 #39 NEW cov: 12508 ft: 15563 corp: 27/290b lim: 30 exec/s: 39 rss: 73Mb L: 6/27 MS: 1 ChangeByte- 00:06:05.126 [2024-09-30 21:49:49.281027] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000bc02 00:06:05.126 [2024-09-30 21:49:49.281147] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (579436) > buf size (4096) 00:06:05.126 [2024-09-30 21:49:49.281378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.281404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 [2024-09-30 21:49:49.281462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.281477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.126 [2024-09-30 21:49:49.281531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.281544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.126 [2024-09-30 21:49:49.281601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:35da023d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.281616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:05.126 #40 NEW cov: 12508 ft: 15601 corp: 28/314b lim: 30 exec/s: 40 rss: 73Mb L: 24/27 MS: 1 InsertRepeatedBytes- 00:06:05.126 [2024-09-30 21:49:49.320819] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000001 00:06:05.126 [2024-09-30 21:49:49.321047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a008300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.321072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 #41 NEW cov: 12508 ft: 15610 corp: 29/323b lim: 30 exec/s: 41 rss: 73Mb L: 9/27 MS: 1 ChangeBinInt- 00:06:05.126 [2024-09-30 
21:49:49.361401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.361426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 [2024-09-30 21:49:49.361481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.361498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.126 [2024-09-30 21:49:49.361555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.361569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.126 #42 NEW cov: 12508 ft: 15861 corp: 30/343b lim: 30 exec/s: 42 rss: 73Mb L: 20/27 MS: 1 CopyPart- 00:06:05.126 [2024-09-30 21:49:49.401046] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:06:05.126 [2024-09-30 21:49:49.401284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.401313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 #43 NEW cov: 12508 ft: 15887 corp: 31/352b lim: 30 exec/s: 43 rss: 73Mb L: 9/27 MS: 1 ChangeBinInt- 00:06:05.126 [2024-09-30 21:49:49.461256] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (50220) > buf size (4096) 00:06:05.126 [2024-09-30 21:49:49.461484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:310a000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.126 [2024-09-30 21:49:49.461510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.126 #44 NEW cov: 12508 ft: 15893 corp: 32/362b lim: 30 exec/s: 44 rss: 74Mb L: 10/27 MS: 1 InsertByte- 00:06:05.386 [2024-09-30 21:49:49.501450] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:05.386 [2024-09-30 21:49:49.501570] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:05.386 [2024-09-30 21:49:49.501687] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786432) > buf size (4096) 00:06:05.386 [2024-09-30 21:49:49.501903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:fbbc0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.501928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.386 [2024-09-30 21:49:49.501989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:35da833d cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.502003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:06:05.386 [2024-09-30 21:49:49.502060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.502075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:05.386 #45 NEW cov: 12508 ft: 15905 corp: 33/380b lim: 30 exec/s: 45 rss: 74Mb L: 18/27 MS: 1 InsertRepeatedBytes- 00:06:05.386 [2024-09-30 21:49:49.561568] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (14340) > buf size (4096) 00:06:05.386 [2024-09-30 21:49:49.561687] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:06:05.386 [2024-09-30 21:49:49.561918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0e000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.561944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.386 [2024-09-30 21:49:49.562001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff83c8 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.562016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:05.386 #46 NEW cov: 12508 ft: 15922 corp: 34/397b lim: 30 exec/s: 46 rss: 74Mb L: 17/27 MS: 1 ChangeByte- 00:06:05.386 [2024-09-30 21:49:49.621688] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2000035da 00:06:05.386 [2024-09-30 21:49:49.621942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0afb02bc cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.621977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.386 #47 NEW cov: 12508 ft: 15942 corp: 35/406b lim: 30 exec/s: 47 rss: 74Mb L: 9/27 MS: 1 PersAutoDict- DE: "\373\274\0025\332=f\000"- 00:06:05.386 [2024-09-30 21:49:49.681883] ctrlr.c:2698:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (1792) > len (4) 00:06:05.386 [2024-09-30 21:49:49.682099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.386 [2024-09-30 21:49:49.682125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:05.386 #48 NEW cov: 12508 ft: 15949 corp: 36/417b lim: 30 exec/s: 24 rss: 74Mb L: 11/27 MS: 1 CMP- DE: "\007\000"- 00:06:05.386 #48 DONE cov: 12508 ft: 15949 corp: 36/417b lim: 30 exec/s: 24 rss: 74Mb 00:06:05.386 ###### Recommended dictionary. ###### 00:06:05.386 "\373\274\0025\332=f\000" # Uses: 3 00:06:05.386 "\007\000" # Uses: 0 00:06:05.386 ###### End of recommended dictionary. 
###### 00:06:05.386 Done 48 runs in 2 second(s) 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:05.645 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:05.646 21:49:49 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:06:05.646 [2024-09-30 21:49:49.894015] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:05.646 [2024-09-30 21:49:49.894101] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045224 ] 00:06:05.904 [2024-09-30 21:49:50.082630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.905 [2024-09-30 21:49:50.155458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.905 [2024-09-30 21:49:50.214965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.905 [2024-09-30 21:49:50.231348] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:06:05.905 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:05.905 INFO: Seed: 3439146078 00:06:06.163 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:06.163 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:06.163 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:06:06.163 INFO: A corpus is not provided, starting from an empty corpus 00:06:06.163 #2 INITED exec/s: 0 rss: 66Mb 00:06:06.163 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:06.163 This may also happen if the target rejected all inputs we tried so far 00:06:06.163 [2024-09-30 21:49:50.286592] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.163 [2024-09-30 21:49:50.286719] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.163 [2024-09-30 21:49:50.286834] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.163 [2024-09-30 21:49:50.286948] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.163 [2024-09-30 21:49:50.287168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.163 [2024-09-30 21:49:50.287202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.163 [2024-09-30 21:49:50.287260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.163 [2024-09-30 21:49:50.287277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.163 [2024-09-30 21:49:50.287336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.163 [2024-09-30 21:49:50.287353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.163 [2024-09-30 21:49:50.287409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.163 [2024-09-30 21:49:50.287425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.422 NEW_FUNC[1/713]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:06:06.422 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:06.422 #33 NEW cov: 12185 ft: 12187 corp: 2/30b lim: 35 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:06:06.422 [2024-09-30 21:49:50.628230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6666000a cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.422 [2024-09-30 21:49:50.628291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.422 [2024-09-30 21:49:50.628384] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.422 [2024-09-30 21:49:50.628411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.422 [2024-09-30 21:49:50.628493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.422 [2024-09-30 21:49:50.628521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.422 [2024-09-30 21:49:50.628600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.422 [2024-09-30 21:49:50.628626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.422 [2024-09-30 21:49:50.628703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.422 [2024-09-30 21:49:50.628729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:06.422 NEW_FUNC[1/1]: 0x14a2288 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:734 00:06:06.422 #35 NEW cov: 12315 ft: 13010 corp: 3/65b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:06.423 [2024-09-30 21:49:50.677895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.677923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.677976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.677997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.678046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.678059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.678112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.678125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.423 #39 NEW cov: 12321 ft: 13311 corp: 4/95b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 4 CopyPart-CopyPart-InsertByte-InsertRepeatedBytes- 00:06:06.423 [2024-09-30 21:49:50.717880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.717906] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.717960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.717974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.718024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:00002a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.718038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.423 #40 NEW cov: 12406 ft: 14001 corp: 5/117b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 CrossOver- 00:06:06.423 [2024-09-30 21:49:50.778205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2828000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.778234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.778286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.778302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.778359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.778373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.423 [2024-09-30 21:49:50.778426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.423 [2024-09-30 21:49:50.778439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.682 #41 NEW cov: 12406 ft: 14182 corp: 6/149b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:06:06.682 [2024-09-30 21:49:50.818019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2828000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.818045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.818099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.818115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.682 #42 NEW cov: 12406 ft: 14461 corp: 7/169b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:06:06.682 [2024-09-30 21:49:50.878347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a 
cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.878371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.878441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.878455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.878515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:712a002a cdw11:00002a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.878529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.682 #43 NEW cov: 12406 ft: 14511 corp: 8/191b lim: 35 exec/s: 0 rss: 74Mb L: 22/35 MS: 1 ChangeByte- 00:06:06.682 [2024-09-30 21:49:50.938742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6666000a cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.938767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.938820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.938837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.938886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.938902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.938953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.938966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.939016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.939028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:06.682 #44 NEW cov: 12406 ft: 14528 corp: 9/226b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:06:06.682 [2024-09-30 21:49:50.998517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2828000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.998542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.682 [2024-09-30 21:49:50.998595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:06.682 [2024-09-30 21:49:50.998608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.682 #45 NEW cov: 12406 ft: 14566 corp: 10/246b lim: 35 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 ShuffleBytes- 00:06:06.943 [2024-09-30 21:49:51.058937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:ff002aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.058961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.059014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.059029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.059080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.059093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.059145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.059158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.943 #46 NEW cov: 12406 ft: 14625 corp: 11/279b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:06.943 [2024-09-30 21:49:51.098561] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.098683] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.098791] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.098899] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.099112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.099140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.099194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.099214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.099267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.099283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.943 
[2024-09-30 21:49:51.099330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.099345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.943 #47 NEW cov: 12406 ft: 14642 corp: 12/308b lim: 35 exec/s: 0 rss: 74Mb L: 29/35 MS: 1 ShuffleBytes- 00:06:06.943 [2024-09-30 21:49:51.158757] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.158898] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.159013] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.159124] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.943 [2024-09-30 21:49:51.159337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.159364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.159419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.159435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.159483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:5b000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.159498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.159552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.159568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.943 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:06.943 #48 NEW cov: 12429 ft: 14725 corp: 13/338b lim: 35 exec/s: 0 rss: 74Mb L: 30/35 MS: 1 InsertByte- 00:06:06.943 [2024-09-30 21:49:51.199317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.199342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.199394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.199408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.943 [2024-09-30 21:49:51.199457] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.943 [2024-09-30 21:49:51.199473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.199522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.199535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.944 #49 NEW cov: 12429 ft: 14796 corp: 14/371b lim: 35 exec/s: 0 rss: 74Mb L: 33/35 MS: 1 ShuffleBytes- 00:06:06.944 [2024-09-30 21:49:51.259021] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.944 [2024-09-30 21:49:51.259138] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.944 [2024-09-30 21:49:51.259247] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.944 [2024-09-30 21:49:51.259362] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.944 [2024-09-30 21:49:51.259562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.259588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.259642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.259659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.259711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.259727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.259778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.259794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:06.944 #50 NEW cov: 12429 ft: 14816 corp: 15/400b lim: 35 exec/s: 50 rss: 74Mb L: 29/35 MS: 1 ChangeByte- 00:06:06.944 [2024-09-30 21:49:51.299161] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:06.944 [2024-09-30 21:49:51.299657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2a00002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.299684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.299734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.299748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.299799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.299812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:06.944 [2024-09-30 21:49:51.299863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:06.944 [2024-09-30 21:49:51.299877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.203 #51 NEW cov: 12429 ft: 14880 corp: 16/434b lim: 35 exec/s: 51 rss: 74Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:06:07.203 [2024-09-30 21:49:51.339614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.203 [2024-09-30 21:49:51.339638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.339692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.339708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.339759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:000029fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.339772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.204 #52 NEW cov: 12429 ft: 14925 corp: 17/456b lim: 35 exec/s: 52 rss: 74Mb L: 22/35 MS: 1 ChangeBinInt- 00:06:07.204 [2024-09-30 21:49:51.379467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce870028 cdw11:87008787 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.379491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 #57 NEW cov: 12429 ft: 15242 corp: 18/463b lim: 35 exec/s: 57 rss: 74Mb L: 7/35 MS: 5 CrossOver-InsertByte-ChangeBinInt-ChangeByte-InsertRepeatedBytes- 00:06:07.204 [2024-09-30 21:49:51.419941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2828000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.419965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.420018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:280028d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.420032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.420082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.420095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.420163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.420177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.204 #58 NEW cov: 12429 ft: 15296 corp: 19/496b lim: 35 exec/s: 58 rss: 74Mb L: 33/35 MS: 1 InsertByte- 00:06:07.204 [2024-09-30 21:49:51.459928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.459952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.460003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.460018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.460070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a71002a cdw11:00002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.460084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.204 #59 NEW cov: 12429 ft: 15331 corp: 20/519b lim: 35 exec/s: 59 rss: 74Mb L: 23/35 MS: 1 InsertByte- 00:06:07.204 [2024-09-30 21:49:51.520246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:ff002aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.520270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.520326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.520340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.520393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.520406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.520459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.520473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:06:07.204 #60 NEW cov: 12429 ft: 15341 corp: 21/552b lim: 35 exec/s: 60 rss: 74Mb L: 33/35 MS: 1 ShuffleBytes- 00:06:07.204 [2024-09-30 21:49:51.560088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2828000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.560112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.204 [2024-09-30 21:49:51.560166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28870028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.204 [2024-09-30 21:49:51.560180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.464 #66 NEW cov: 12429 ft: 15364 corp: 22/572b lim: 35 exec/s: 66 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:06:07.464 [2024-09-30 21:49:51.620644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6666000a cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.620668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.620723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.620736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.620787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.620800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.620850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.620863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.620914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.620927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.464 #67 NEW cov: 12429 ft: 15375 corp: 23/607b lim: 35 exec/s: 67 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:06:07.464 [2024-09-30 21:49:51.660215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce870028 cdw11:87008786 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.660240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.464 #68 NEW cov: 12429 ft: 15413 corp: 24/614b lim: 35 exec/s: 68 rss: 74Mb L: 7/35 MS: 1 ChangeBit- 00:06:07.464 [2024-09-30 21:49:51.720782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a 
cdw11:ff002aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.720807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.720860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.720878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.720927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.720940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.720992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.721005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.464 #69 NEW cov: 12429 ft: 15426 corp: 25/647b lim: 35 exec/s: 69 rss: 75Mb L: 33/35 MS: 1 ChangeBit- 00:06:07.464 [2024-09-30 21:49:51.780413] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.464 [2024-09-30 21:49:51.780732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2a00002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.780758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.464 [2024-09-30 21:49:51.780812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.464 [2024-09-30 21:49:51.780826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.464 #70 NEW cov: 12429 ft: 15437 corp: 26/664b lim: 35 exec/s: 70 rss: 75Mb L: 17/35 MS: 1 EraseBytes- 00:06:07.723 [2024-09-30 21:49:51.840582] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.723 [2024-09-30 21:49:51.841072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:2a00002a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.723 [2024-09-30 21:49:51.841100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.723 [2024-09-30 21:49:51.841153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.723 [2024-09-30 21:49:51.841167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.723 [2024-09-30 21:49:51.841218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.723 [2024-09-30 21:49:51.841232] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.723 [2024-09-30 21:49:51.841289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.723 [2024-09-30 21:49:51.841302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.724 #71 NEW cov: 12429 ft: 15493 corp: 27/695b lim: 35 exec/s: 71 rss: 75Mb L: 31/35 MS: 1 EraseBytes- 00:06:07.724 [2024-09-30 21:49:51.880826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce870028 cdw11:870087e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.880851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.724 #72 NEW cov: 12429 ft: 15518 corp: 28/702b lim: 35 exec/s: 72 rss: 75Mb L: 7/35 MS: 1 ChangeByte- 00:06:07.724 [2024-09-30 21:49:51.921158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.921183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.921238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.921251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.921301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:000029fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.921319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.724 #73 NEW cov: 12429 ft: 15625 corp: 29/724b lim: 35 exec/s: 73 rss: 75Mb L: 22/35 MS: 1 ShuffleBytes- 00:06:07.724 [2024-09-30 21:49:51.981589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:6666000a cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.981614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.981668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.981682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.981734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:3c660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.981747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.981797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:66660066 cdw11:66006666 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.981810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:51.981860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:66660066 cdw11:66006666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:51.981873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:07.724 #74 NEW cov: 12429 ft: 15637 corp: 30/759b lim: 35 exec/s: 74 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:06:07.724 [2024-09-30 21:49:52.041243] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.724 [2024-09-30 21:49:52.041646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:00002a01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:52.041674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:52.041729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:2a000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:52.041747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:52.041799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:52.041813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.724 [2024-09-30 21:49:52.041868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:2a2a002a cdw11:fd002a29 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.724 [2024-09-30 21:49:52.041881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.724 #75 NEW cov: 12429 ft: 15644 corp: 31/789b lim: 35 exec/s: 75 rss: 75Mb L: 30/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:07.984 [2024-09-30 21:49:52.101411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce870020 cdw11:870087e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.101436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.984 #76 NEW cov: 12429 ft: 15655 corp: 32/796b lim: 35 exec/s: 76 rss: 75Mb L: 7/35 MS: 1 ChangeBit- 00:06:07.984 [2024-09-30 21:49:52.161861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:2a2a002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.161886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.984 [2024-09-30 21:49:52.161941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:2a2e002a cdw11:2a002a2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.161955] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.984 [2024-09-30 21:49:52.162006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2a2a002a cdw11:000029fd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.162019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.984 #77 NEW cov: 12429 ft: 15660 corp: 33/818b lim: 35 exec/s: 77 rss: 75Mb L: 22/35 MS: 1 ChangeByte- 00:06:07.984 [2024-09-30 21:49:52.201613] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.984 [2024-09-30 21:49:52.201730] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.984 [2024-09-30 21:49:52.201837] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.984 [2024-09-30 21:49:52.201944] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.984 [2024-09-30 21:49:52.202153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.202180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.984 [2024-09-30 21:49:52.202238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.202254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:07.984 [2024-09-30 21:49:52.202314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.202329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:07.984 [2024-09-30 21:49:52.202380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.202395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:07.984 #83 NEW cov: 12429 ft: 15682 corp: 34/851b lim: 35 exec/s: 83 rss: 75Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:06:07.984 [2024-09-30 21:49:52.261873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ce870020 cdw11:e7008787 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:07.984 [2024-09-30 21:49:52.261898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:07.984 #84 NEW cov: 12429 ft: 15689 corp: 35/862b lim: 35 exec/s: 42 rss: 75Mb L: 11/35 MS: 1 CopyPart- 00:06:07.984 #84 DONE cov: 12429 ft: 15689 corp: 35/862b lim: 35 exec/s: 42 rss: 75Mb 00:06:07.984 ###### Recommended dictionary. ###### 00:06:07.984 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:07.984 ###### End of recommended dictionary. 
###### 00:06:07.984 Done 84 runs in 2 second(s) 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:08.244 21:49:52 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:06:08.244 [2024-09-30 21:49:52.477678] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:08.244 [2024-09-30 21:49:52.477770] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045758 ] 00:06:08.503 [2024-09-30 21:49:52.652834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.503 [2024-09-30 21:49:52.719418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.503 [2024-09-30 21:49:52.778245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.503 [2024-09-30 21:49:52.794625] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:06:08.503 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:08.503 INFO: Seed: 1708170424 00:06:08.503 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:08.503 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:08.503 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:06:08.503 INFO: A corpus is not provided, starting from an empty corpus 00:06:08.503 #2 INITED exec/s: 0 rss: 65Mb 00:06:08.503 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:08.503 This may also happen if the target rejected all inputs we tried so far 00:06:09.022 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:06:09.022 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:09.022 #11 NEW cov: 12067 ft: 12056 corp: 2/5b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 4 InsertByte-CrossOver-CrossOver-CrossOver- 00:06:09.022 #12 NEW cov: 12198 ft: 12667 corp: 3/9b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:06:09.022 #13 NEW cov: 12204 ft: 13018 corp: 4/13b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:06:09.022 #14 NEW cov: 12289 ft: 13378 corp: 5/17b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:06:09.022 #15 NEW cov: 12289 ft: 13408 corp: 6/21b lim: 20 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ShuffleBytes- 00:06:09.281 #16 NEW cov: 12289 ft: 13515 corp: 7/26b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:09.281 #17 NEW cov: 12289 ft: 13554 corp: 8/30b lim: 20 exec/s: 0 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:06:09.281 #18 NEW cov: 12289 ft: 13619 corp: 9/35b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:09.281 #19 NEW cov: 12289 ft: 13640 corp: 10/40b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CrossOver- 00:06:09.281 [2024-09-30 21:49:53.592265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:09.281 [2024-09-30 21:49:53.592303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:09.281 NEW_FUNC[1/20]: 0x132e338 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3477 00:06:09.281 NEW_FUNC[2/20]: 0x132eeb8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3419 00:06:09.281 #20 NEW cov: 12634 ft: 14499 corp: 11/54b lim: 20 exec/s: 0 rss: 74Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:06:09.539 #26 NEW cov: 12634 ft: 14533 corp: 12/58b lim: 20 exec/s: 0 rss: 74Mb L: 4/14 MS: 1 ShuffleBytes- 00:06:09.539 #27 NEW cov: 12634 ft: 14553 corp: 13/63b lim: 20 exec/s: 0 rss: 74Mb L: 5/14 MS: 1 CrossOver- 00:06:09.539 #29 NEW cov: 12634 ft: 14575 corp: 14/67b lim: 20 exec/s: 0 rss: 74Mb L: 4/14 MS: 2 EraseBytes-InsertByte- 00:06:09.539 #30 NEW cov: 12634 ft: 14663 corp: 15/74b lim: 20 exec/s: 0 rss: 74Mb L: 7/14 MS: 1 CrossOver- 00:06:09.539 #31 NEW cov: 12634 ft: 14692 corp: 16/80b lim: 20 exec/s: 31 rss: 74Mb L: 6/14 MS: 1 InsertByte- 00:06:09.539 #32 NEW cov: 12634 ft: 14722 corp: 17/85b lim: 20 exec/s: 32 rss: 74Mb L: 5/14 MS: 1 ShuffleBytes- 00:06:09.798 #33 NEW cov: 12634 ft: 14780 corp: 18/91b lim: 20 exec/s: 33 rss: 74Mb L: 6/14 MS: 1 CrossOver- 00:06:09.798 #34 NEW cov: 12634 ft: 14794 corp: 19/96b lim: 20 exec/s: 34 
rss: 74Mb L: 5/14 MS: 1 ShuffleBytes- 00:06:09.798 #35 NEW cov: 12634 ft: 14815 corp: 20/101b lim: 20 exec/s: 35 rss: 74Mb L: 5/14 MS: 1 ChangeByte- 00:06:09.798 #36 NEW cov: 12635 ft: 15041 corp: 21/112b lim: 20 exec/s: 36 rss: 74Mb L: 11/14 MS: 1 InsertRepeatedBytes- 00:06:09.798 #37 NEW cov: 12635 ft: 15062 corp: 22/116b lim: 20 exec/s: 37 rss: 74Mb L: 4/14 MS: 1 ChangeByte- 00:06:10.058 #38 NEW cov: 12635 ft: 15091 corp: 23/120b lim: 20 exec/s: 38 rss: 74Mb L: 4/14 MS: 1 ChangeBit- 00:06:10.058 #39 NEW cov: 12635 ft: 15103 corp: 24/125b lim: 20 exec/s: 39 rss: 74Mb L: 5/14 MS: 1 ChangeBinInt- 00:06:10.058 #40 NEW cov: 12635 ft: 15140 corp: 25/129b lim: 20 exec/s: 40 rss: 74Mb L: 4/14 MS: 1 ShuffleBytes- 00:06:10.058 [2024-09-30 21:49:54.304322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:10.058 [2024-09-30 21:49:54.304351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:10.058 #41 NEW cov: 12652 ft: 15333 corp: 26/146b lim: 20 exec/s: 41 rss: 74Mb L: 17/17 MS: 1 InsertRepeatedBytes- 00:06:10.058 #42 NEW cov: 12652 ft: 15441 corp: 27/150b lim: 20 exec/s: 42 rss: 74Mb L: 4/17 MS: 1 ShuffleBytes- 00:06:10.058 #43 NEW cov: 12652 ft: 15448 corp: 28/155b lim: 20 exec/s: 43 rss: 74Mb L: 5/17 MS: 1 ShuffleBytes- 00:06:10.317 #44 NEW cov: 12652 ft: 15456 corp: 29/160b lim: 20 exec/s: 44 rss: 74Mb L: 5/17 MS: 1 ShuffleBytes- 00:06:10.317 #45 NEW cov: 12652 ft: 15467 corp: 30/165b lim: 20 exec/s: 45 rss: 74Mb L: 5/17 MS: 1 ChangeByte- 00:06:10.317 #46 NEW cov: 12652 ft: 15486 corp: 31/170b lim: 20 exec/s: 46 rss: 74Mb L: 5/17 MS: 1 ShuffleBytes- 00:06:10.317 #47 NEW cov: 12652 ft: 15549 corp: 32/189b lim: 20 exec/s: 47 rss: 74Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:06:10.317 #48 NEW cov: 12652 ft: 15554 corp: 33/194b lim: 20 exec/s: 48 rss: 74Mb L: 5/19 MS: 1 InsertByte- 00:06:10.576 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:10.576 #49 NEW cov: 12675 ft: 15580 corp: 34/199b lim: 20 exec/s: 49 rss: 75Mb L: 5/19 MS: 1 ShuffleBytes- 00:06:10.576 #51 NEW cov: 12675 ft: 15595 corp: 35/204b lim: 20 exec/s: 51 rss: 75Mb L: 5/19 MS: 2 CrossOver-ShuffleBytes- 00:06:10.576 #52 NEW cov: 12675 ft: 15599 corp: 36/210b lim: 20 exec/s: 26 rss: 75Mb L: 6/19 MS: 1 ChangeBinInt- 00:06:10.576 #52 DONE cov: 12675 ft: 15599 corp: 36/210b lim: 20 exec/s: 26 rss: 75Mb 00:06:10.576 Done 52 runs in 2 second(s) 00:06:10.836 21:49:54 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:06:10.836 21:49:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:10.836 21:49:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:10.836 21:49:54 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:10.836 21:49:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:06:10.836 [2024-09-30 21:49:55.046281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:10.836 [2024-09-30 21:49:55.046363] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046121 ] 00:06:11.096 [2024-09-30 21:49:55.225530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.096 [2024-09-30 21:49:55.291763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.096 [2024-09-30 21:49:55.350606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.096 [2024-09-30 21:49:55.366977] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:06:11.096 INFO: Running with entropic power schedule (0xFF, 100). 00:06:11.096 INFO: Seed: 4280174895 00:06:11.096 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:11.096 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:11.096 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:06:11.096 INFO: A corpus is not provided, starting from an empty corpus 00:06:11.096 #2 INITED exec/s: 0 rss: 65Mb 00:06:11.096 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:11.096 This may also happen if the target rejected all inputs we tried so far 00:06:11.096 [2024-09-30 21:49:55.422762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.096 [2024-09-30 21:49:55.422790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.096 [2024-09-30 21:49:55.422844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.096 [2024-09-30 21:49:55.422859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.096 [2024-09-30 21:49:55.422909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.096 [2024-09-30 21:49:55.422922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.096 [2024-09-30 21:49:55.422971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.096 [2024-09-30 21:49:55.422985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.355 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:06:11.355 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:11.355 #16 NEW cov: 12212 ft: 12207 corp: 2/33b lim: 35 exec/s: 0 rss: 73Mb L: 32/32 MS: 4 ChangeBit-ChangeBit-CrossOver-InsertRepeatedBytes- 00:06:11.615 [2024-09-30 21:49:55.743816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.743850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.743911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.743925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.743987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.744001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.744060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.744074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.615 
#17 NEW cov: 12325 ft: 12732 corp: 3/66b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertByte- 00:06:11.615 [2024-09-30 21:49:55.803880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.803907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.803965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.803982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.804042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.804056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.804115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.804129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.615 #23 NEW cov: 12331 ft: 13040 corp: 4/98b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 ChangeByte- 00:06:11.615 [2024-09-30 21:49:55.843981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.844007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.844065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.844080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.844136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.844150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.844205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.844219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.615 #24 NEW cov: 12416 ft: 13311 corp: 5/131b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 CopyPart- 00:06:11.615 [2024-09-30 21:49:55.904134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:54e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.904160] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.904223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.904236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.904293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.904312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.904368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.904382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.615 #25 NEW cov: 12416 ft: 13447 corp: 6/164b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:06:11.615 [2024-09-30 21:49:55.944247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.944272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.944332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.944347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.944403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.944417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.615 [2024-09-30 21:49:55.944475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.615 [2024-09-30 21:49:55.944489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.875 #26 NEW cov: 12416 ft: 13580 corp: 7/196b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 ChangeBit- 00:06:11.876 [2024-09-30 21:49:56.004459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16161216 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.004485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.004537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.004552] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.004608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.004622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.004676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.004691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.876 #27 NEW cov: 12416 ft: 13712 corp: 8/228b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 ChangeBinInt- 00:06:11.876 [2024-09-30 21:49:56.064585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16161216 cdw11:3b3b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.064611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.064670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.064684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.064741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.064754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.064810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.064823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.876 #28 NEW cov: 12416 ft: 13744 corp: 9/260b lim: 35 exec/s: 0 rss: 73Mb L: 32/33 MS: 1 ChangeByte- 00:06:11.876 [2024-09-30 21:49:56.124774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16161216 cdw11:3b3b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.124800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.124860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.124874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.124930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.124944] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.124998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.125012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.876 #29 NEW cov: 12416 ft: 13805 corp: 10/292b lim: 35 exec/s: 0 rss: 74Mb L: 32/33 MS: 1 ShuffleBytes- 00:06:11.876 [2024-09-30 21:49:56.184949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.184974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.185032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.185047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.185104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.185117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.185175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.185189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:11.876 #35 NEW cov: 12416 ft: 13844 corp: 11/326b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 CrossOver- 00:06:11.876 [2024-09-30 21:49:56.225047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.225072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.225146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e90a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.225160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.225218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.225231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:11.876 [2024-09-30 21:49:56.225286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:11.876 [2024-09-30 21:49:56.225300] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.136 #36 NEW cov: 12416 ft: 13890 corp: 12/359b lim: 35 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 CrossOver- 00:06:12.136 [2024-09-30 21:49:56.264815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:13131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.264840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.264897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:13131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.264910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.136 #37 NEW cov: 12416 ft: 14351 corp: 13/376b lim: 35 exec/s: 0 rss: 74Mb L: 17/34 MS: 1 InsertRepeatedBytes- 00:06:12.136 [2024-09-30 21:49:56.305323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:54e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.305349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.305408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.305422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.305479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9aee9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.305493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.305552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.305566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.136 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:12.136 #38 NEW cov: 12439 ft: 14429 corp: 14/409b lim: 35 exec/s: 0 rss: 74Mb L: 33/34 MS: 1 ChangeByte- 00:06:12.136 [2024-09-30 21:49:56.365478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.365504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.365563] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e90a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.365576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.136 
[2024-09-30 21:49:56.365634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e926 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.365648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.365703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.365717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.136 #39 NEW cov: 12439 ft: 14472 corp: 15/443b lim: 35 exec/s: 39 rss: 74Mb L: 34/34 MS: 1 InsertByte- 00:06:12.136 [2024-09-30 21:49:56.425668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.425695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.425754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.425767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.425824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.425838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.425895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.425909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.136 #40 NEW cov: 12439 ft: 14482 corp: 16/475b lim: 35 exec/s: 40 rss: 74Mb L: 32/34 MS: 1 ShuffleBytes- 00:06:12.136 [2024-09-30 21:49:56.465767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:54e9e9e9 cdw11:e9210000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.465793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.465852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e90000 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.465867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.136 [2024-09-30 21:49:56.465923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9aee9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.136 [2024-09-30 21:49:56.465937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:06:12.136 [2024-09-30 21:49:56.466000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.137 [2024-09-30 21:49:56.466014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.396 #41 NEW cov: 12439 ft: 14539 corp: 17/508b lim: 35 exec/s: 41 rss: 74Mb L: 33/34 MS: 1 ChangeBinInt- 00:06:12.396 [2024-09-30 21:49:56.525923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.396 [2024-09-30 21:49:56.525949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.396 [2024-09-30 21:49:56.526003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.526017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.526090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.526110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.526185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.526206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.397 #42 NEW cov: 12439 ft: 14635 corp: 18/542b lim: 35 exec/s: 42 rss: 74Mb L: 34/34 MS: 1 CopyPart- 00:06:12.397 [2024-09-30 21:49:56.586103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.586130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.586191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.586206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.586261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.586275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.586335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.586349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:06:12.397 #43 NEW cov: 12439 ft: 14690 corp: 19/570b lim: 35 exec/s: 43 rss: 74Mb L: 28/34 MS: 1 EraseBytes- 00:06:12.397 [2024-09-30 21:49:56.626223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.626249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.626313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.626329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.626392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.626406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.626466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.626480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.397 #44 NEW cov: 12439 ft: 14725 corp: 20/603b lim: 35 exec/s: 44 rss: 74Mb L: 33/34 MS: 1 CopyPart- 00:06:12.397 [2024-09-30 21:49:56.666172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:13131313 cdw11:13130003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.666198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.666257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f513ffff cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.666271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.666333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:13131313 cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.666347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.397 #45 NEW cov: 12439 ft: 14976 corp: 21/624b lim: 35 exec/s: 45 rss: 74Mb L: 21/34 MS: 1 CMP- DE: "\376\377\377\365"- 00:06:12.397 [2024-09-30 21:49:56.726500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.726527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.726586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 
21:49:56.726600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.726657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.726671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.397 [2024-09-30 21:49:56.726729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.397 [2024-09-30 21:49:56.726743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.657 #46 NEW cov: 12439 ft: 15017 corp: 22/656b lim: 35 exec/s: 46 rss: 74Mb L: 32/34 MS: 1 ShuffleBytes- 00:06:12.657 [2024-09-30 21:49:56.786717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e20003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.786744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.786800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.786814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.786891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.786906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.786966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.786981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.657 #47 NEW cov: 12439 ft: 15032 corp: 23/689b lim: 35 exec/s: 47 rss: 74Mb L: 33/34 MS: 1 ChangeBinInt- 00:06:12.657 [2024-09-30 21:49:56.826777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.826804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.826864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.826878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.826936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 
[2024-09-30 21:49:56.826951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.827009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.827023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.657 #48 NEW cov: 12439 ft: 15034 corp: 24/722b lim: 35 exec/s: 48 rss: 74Mb L: 33/34 MS: 1 ChangeBit- 00:06:12.657 [2024-09-30 21:49:56.867086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.867112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.867171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e90a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.867185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.867243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e926 cdw11:e9e90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.867257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.867317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.867331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.867390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.867403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.657 #49 NEW cov: 12439 ft: 15107 corp: 25/757b lim: 35 exec/s: 49 rss: 74Mb L: 35/35 MS: 1 InsertByte- 00:06:12.657 [2024-09-30 21:49:56.927043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.927070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.927128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.927142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.927197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 
[2024-09-30 21:49:56.927211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.927268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.927282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.657 #50 NEW cov: 12439 ft: 15116 corp: 26/791b lim: 35 exec/s: 50 rss: 75Mb L: 34/35 MS: 1 CrossOver- 00:06:12.657 [2024-09-30 21:49:56.987028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:13131313 cdw11:13130003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.987054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.987115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:f513ffff cdw11:13130003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.657 [2024-09-30 21:49:56.987129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.657 [2024-09-30 21:49:56.987183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:f513ffff cdw11:13130000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.658 [2024-09-30 21:49:56.987198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.917 #51 NEW cov: 12439 ft: 15131 corp: 27/812b lim: 35 exec/s: 51 rss: 75Mb L: 21/35 MS: 1 PersAutoDict- DE: "\376\377\377\365"- 00:06:12.918 [2024-09-30 21:49:57.047369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.047395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.047451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9220a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.047466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.047539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e926 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.047561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.047635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.047657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 #52 NEW cov: 12439 ft: 15133 corp: 28/846b lim: 35 exec/s: 52 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:06:12.918 [2024-09-30 21:49:57.087484] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16161216 cdw11:3b3b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.087509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.087570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9a9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.087584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.087640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.087654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.087712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.087725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 #53 NEW cov: 12439 ft: 15172 corp: 29/878b lim: 35 exec/s: 53 rss: 75Mb L: 32/35 MS: 1 ChangeBit- 00:06:12.918 [2024-09-30 21:49:57.127619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.127645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.127701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.127714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.127769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.127783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.127839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:ede90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.127852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 #54 NEW cov: 12439 ft: 15204 corp: 30/911b lim: 35 exec/s: 54 rss: 75Mb L: 33/35 MS: 1 ChangeBit- 00:06:12.918 [2024-09-30 21:49:57.167714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:16161216 cdw11:3b3b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.167740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.167795] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.167809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.167865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.167879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.167938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:c9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.167952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 #55 NEW cov: 12439 ft: 15260 corp: 31/943b lim: 35 exec/s: 55 rss: 75Mb L: 32/35 MS: 1 ChangeBit- 00:06:12.918 [2024-09-30 21:49:57.228076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:54e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.228101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.228157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.228170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.228223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.228236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.228290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.228303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.228363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e9e9e9e9 cdw11:e9740000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.228376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:12.918 #56 NEW cov: 12439 ft: 15266 corp: 32/978b lim: 35 exec/s: 56 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:06:12.918 [2024-09-30 21:49:57.268206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.268232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.268288] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e90a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.268301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.268363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e926 cdw11:e9e90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.268377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.268432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.268446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:12.918 [2024-09-30 21:49:57.268501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:12.918 [2024-09-30 21:49:57.268515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:13.179 #57 NEW cov: 12439 ft: 15293 corp: 33/1013b lim: 35 exec/s: 57 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:13.179 [2024-09-30 21:49:57.328192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:3be90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.328217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.328268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e90a cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.328286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.328363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e8 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.328386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.328457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.328479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.179 #58 NEW cov: 12439 ft: 15298 corp: 34/1046b lim: 35 exec/s: 58 rss: 75Mb L: 33/35 MS: 1 ChangeBit- 00:06:13.179 [2024-09-30 21:49:57.368261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.368287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.368350] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.368365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.368420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.368435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.368491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.368504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.179 #59 NEW cov: 12439 ft: 15326 corp: 35/1079b lim: 35 exec/s: 59 rss: 75Mb L: 33/35 MS: 1 InsertByte- 00:06:13.179 [2024-09-30 21:49:57.408388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.408414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.408473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.408488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.408541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e9e9e9e9 cdw11:e9e90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.408555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:13.179 [2024-09-30 21:49:57.408608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:e9e9e9e9 cdw11:2ae90003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.179 [2024-09-30 21:49:57.408625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:13.179 #60 NEW cov: 12439 ft: 15328 corp: 36/1113b lim: 35 exec/s: 30 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:06:13.179 #60 DONE cov: 12439 ft: 15328 corp: 36/1113b lim: 35 exec/s: 30 rss: 75Mb 00:06:13.179 ###### Recommended dictionary. ###### 00:06:13.179 "\376\377\377\365" # Uses: 1 00:06:13.179 ###### End of recommended dictionary. 
###### 00:06:13.179 Done 60 runs in 2 second(s) 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:13.438 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:13.439 21:49:57 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:06:13.439 [2024-09-30 21:49:57.619269] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:13.439 [2024-09-30 21:49:57.619358] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046579 ] 00:06:13.439 [2024-09-30 21:49:57.796077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.698 [2024-09-30 21:49:57.862568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.698 [2024-09-30 21:49:57.921314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.698 [2024-09-30 21:49:57.937687] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:06:13.698 INFO: Running with entropic power schedule (0xFF, 100). 
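For readability, the nvmf/run.sh trace above (the start_llvm_fuzz 5 1 0x1 steps) amounts to roughly the following standalone invocation. This is a sketch only: the paths, flags, port, and transport ID are copied from the trace, while the variable names, the redirections for the sed/echo steps, and the script form itself are assumptions.

#!/usr/bin/env bash
# Sketch of the traced start_llvm_fuzz 5 1 0x1 steps; assumptions are noted inline.
workspace=/var/jenkins/workspace/short-fuzz-phy-autotest
fuzzer_type=5      # argument 1 of start_llvm_fuzz
timen=1            # argument 2; passed to the fuzzer as -t
core=0x1           # argument 3; passed as the core mask -m
corpus_dir=$workspace/spdk/../corpus/llvm_nvmf_$fuzzer_type
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
port=4405          # 4405 in the trace; the printf %02d 5 step suggests 44 plus the zero-padded fuzzer number
mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
# The trace rewrites the listener port in the JSON config; redirecting into $nvmf_cfg is assumed.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$workspace/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
# Likewise, the two echoed LeakSanitizer suppressions presumably land in $suppress_file.
echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
"$workspace/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$workspace/spdk/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$fuzzer_type"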
00:06:13.698 INFO: Seed: 2556217468 00:06:13.698 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:13.698 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:13.698 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:06:13.698 INFO: A corpus is not provided, starting from an empty corpus 00:06:13.698 #2 INITED exec/s: 0 rss: 65Mb 00:06:13.698 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:13.698 This may also happen if the target rejected all inputs we tried so far 00:06:13.698 [2024-09-30 21:49:58.003092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:13.698 [2024-09-30 21:49:58.003122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:13.957 NEW_FUNC[1/715]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:06:13.957 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:13.957 #9 NEW cov: 12205 ft: 12196 corp: 2/15b lim: 45 exec/s: 0 rss: 73Mb L: 14/14 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:14.216 [2024-09-30 21:49:58.333981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.334024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.216 #10 NEW cov: 12335 ft: 12673 corp: 3/29b lim: 45 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ChangeBit- 00:06:14.216 [2024-09-30 21:49:58.393998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.394025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.216 #11 NEW cov: 12341 ft: 12834 corp: 4/44b lim: 45 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 CrossOver- 00:06:14.216 [2024-09-30 21:49:58.454135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.454162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.216 #13 NEW cov: 12426 ft: 13303 corp: 5/53b lim: 45 exec/s: 0 rss: 73Mb L: 9/15 MS: 2 ShuffleBytes-CMP- DE: "\001\000\000\000\000\000\000\002"- 00:06:14.216 [2024-09-30 21:49:58.494426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.494452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.216 [2024-09-30 21:49:58.494508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 
[2024-09-30 21:49:58.494521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.216 #14 NEW cov: 12426 ft: 14073 corp: 6/74b lim: 45 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 CrossOver- 00:06:14.216 [2024-09-30 21:49:58.534526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.534552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.216 [2024-09-30 21:49:58.534609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.216 [2024-09-30 21:49:58.534623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.216 #20 NEW cov: 12426 ft: 14185 corp: 7/96b lim: 45 exec/s: 0 rss: 73Mb L: 22/22 MS: 1 InsertByte- 00:06:14.549 [2024-09-30 21:49:58.594539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.549 [2024-09-30 21:49:58.594568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.549 #21 NEW cov: 12426 ft: 14294 corp: 8/106b lim: 45 exec/s: 0 rss: 73Mb L: 10/22 MS: 1 EraseBytes- 00:06:14.549 [2024-09-30 21:49:58.654708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.549 [2024-09-30 21:49:58.654734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.549 #22 NEW cov: 12426 ft: 14332 corp: 9/116b lim: 45 exec/s: 0 rss: 73Mb L: 10/22 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:06:14.549 [2024-09-30 21:49:58.714879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:02000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.549 [2024-09-30 21:49:58.714904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.549 #23 NEW cov: 12426 ft: 14395 corp: 10/128b lim: 45 exec/s: 0 rss: 74Mb L: 12/22 MS: 1 CMP- DE: "\002\000"- 00:06:14.549 [2024-09-30 21:49:58.775211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.549 [2024-09-30 21:49:58.775237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.549 [2024-09-30 21:49:58.775294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.549 [2024-09-30 21:49:58.775314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.549 #24 NEW cov: 12426 ft: 14515 corp: 11/146b lim: 45 exec/s: 0 rss: 74Mb L: 18/22 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:06:14.549 [2024-09-30 21:49:58.815324] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.550 [2024-09-30 21:49:58.815350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.550 [2024-09-30 21:49:58.815423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.550 [2024-09-30 21:49:58.815437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.883 #25 NEW cov: 12426 ft: 14533 corp: 12/168b lim: 45 exec/s: 0 rss: 74Mb L: 22/22 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:06:14.883 [2024-09-30 21:49:58.875344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:58.875370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:14.883 #26 NEW cov: 12449 ft: 14607 corp: 13/182b lim: 45 exec/s: 0 rss: 74Mb L: 14/22 MS: 1 ChangeBit- 00:06:14.883 [2024-09-30 21:49:58.915641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:58.915667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 [2024-09-30 21:49:58.915723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:58.915736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.883 #27 NEW cov: 12449 ft: 14677 corp: 14/202b lim: 45 exec/s: 0 rss: 74Mb L: 20/22 MS: 1 CopyPart- 00:06:14.883 [2024-09-30 21:49:58.975623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:7aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:58.975649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 #28 NEW cov: 12449 ft: 14710 corp: 15/217b lim: 45 exec/s: 28 rss: 74Mb L: 15/22 MS: 1 InsertByte- 00:06:14.883 [2024-09-30 21:49:59.015895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.015921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 [2024-09-30 21:49:59.015978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.015992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.883 #29 NEW cov: 12449 ft: 
14737 corp: 16/235b lim: 45 exec/s: 29 rss: 74Mb L: 18/22 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:06:14.883 [2024-09-30 21:49:59.056023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.056050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 [2024-09-30 21:49:59.056108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.056121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.883 #30 NEW cov: 12449 ft: 14754 corp: 17/253b lim: 45 exec/s: 30 rss: 74Mb L: 18/22 MS: 1 PersAutoDict- DE: "\002\000"- 00:06:14.883 [2024-09-30 21:49:59.116027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff930a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.116054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 #31 NEW cov: 12449 ft: 14767 corp: 18/268b lim: 45 exec/s: 31 rss: 74Mb L: 15/22 MS: 1 InsertByte- 00:06:14.883 [2024-09-30 21:49:59.156144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.156170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 #34 NEW cov: 12449 ft: 14796 corp: 19/278b lim: 45 exec/s: 34 rss: 74Mb L: 10/22 MS: 3 InsertByte-ChangeBinInt-CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:14.883 [2024-09-30 21:49:59.196438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.196464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:14.883 [2024-09-30 21:49:59.196521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ff320007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:14.883 [2024-09-30 21:49:59.196535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:14.883 #35 NEW cov: 12449 ft: 14815 corp: 20/300b lim: 45 exec/s: 35 rss: 74Mb L: 22/22 MS: 1 InsertByte- 00:06:15.174 [2024-09-30 21:49:59.236715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00002c0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.236744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 [2024-09-30 21:49:59.236802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.236815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.174 [2024-09-30 21:49:59.236870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.236884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:15.174 #40 NEW cov: 12449 ft: 15088 corp: 21/327b lim: 45 exec/s: 40 rss: 74Mb L: 27/27 MS: 5 ShuffleBytes-ShuffleBytes-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:15.174 [2024-09-30 21:49:59.276489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:40000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.276515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 #41 NEW cov: 12449 ft: 15114 corp: 22/336b lim: 45 exec/s: 41 rss: 74Mb L: 9/27 MS: 1 ChangeByte- 00:06:15.174 [2024-09-30 21:49:59.336628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.336654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 #42 NEW cov: 12449 ft: 15141 corp: 23/352b lim: 45 exec/s: 42 rss: 74Mb L: 16/27 MS: 1 InsertByte- 00:06:15.174 [2024-09-30 21:49:59.376894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.376920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 [2024-09-30 21:49:59.376976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ff320007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.376989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.174 #43 NEW cov: 12449 ft: 15157 corp: 24/374b lim: 45 exec/s: 43 rss: 74Mb L: 22/27 MS: 1 ChangeBit- 00:06:15.174 [2024-09-30 21:49:59.436933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff7aff cdw11:0aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.436959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 #44 NEW cov: 12449 ft: 15169 corp: 25/389b lim: 45 exec/s: 44 rss: 74Mb L: 15/27 MS: 1 ShuffleBytes- 00:06:15.174 [2024-09-30 21:49:59.497057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:7aff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.497084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 #45 NEW cov: 12449 ft: 15214 corp: 26/405b lim: 45 exec/s: 45 rss: 74Mb L: 16/27 MS: 1 CopyPart- 00:06:15.174 [2024-09-30 21:49:59.537349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.537375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.174 [2024-09-30 21:49:59.537433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.174 [2024-09-30 21:49:59.537451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.433 #46 NEW cov: 12449 ft: 15222 corp: 27/423b lim: 45 exec/s: 46 rss: 74Mb L: 18/27 MS: 1 CMP- DE: "\000\037"- 00:06:15.433 [2024-09-30 21:49:59.597520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.597546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.433 [2024-09-30 21:49:59.597603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00020007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.597616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.433 #47 NEW cov: 12449 ft: 15233 corp: 28/443b lim: 45 exec/s: 47 rss: 74Mb L: 20/27 MS: 1 CopyPart- 00:06:15.433 [2024-09-30 21:49:59.637593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.637619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.433 [2024-09-30 21:49:59.637675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffff0a cdw11:ff320007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.637689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.433 #48 NEW cov: 12449 ft: 15250 corp: 29/465b lim: 45 exec/s: 48 rss: 74Mb L: 22/27 MS: 1 ChangeByte- 00:06:15.433 [2024-09-30 21:49:59.677725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff930a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.677751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.433 [2024-09-30 21:49:59.677806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.677820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.433 #49 NEW cov: 12449 ft: 15257 corp: 30/484b lim: 45 exec/s: 49 rss: 75Mb L: 19/27 MS: 1 CopyPart- 00:06:15.433 [2024-09-30 21:49:59.737924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.737951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.433 [2024-09-30 21:49:59.738007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.738022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.433 #50 NEW cov: 12449 ft: 15263 corp: 31/504b lim: 45 exec/s: 50 rss: 75Mb L: 20/27 MS: 1 ShuffleBytes- 00:06:15.433 [2024-09-30 21:49:59.798117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.798144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.433 [2024-09-30 21:49:59.798200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00020000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.433 [2024-09-30 21:49:59.798214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.693 #51 NEW cov: 12449 ft: 15278 corp: 32/522b lim: 45 exec/s: 51 rss: 75Mb L: 18/27 MS: 1 ChangeByte- 00:06:15.693 [2024-09-30 21:49:59.858295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:1f000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.693 [2024-09-30 21:49:59.858327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.693 [2024-09-30 21:49:59.858381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0affffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.693 [2024-09-30 21:49:59.858395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:15.693 #52 NEW cov: 12449 ft: 15286 corp: 33/544b lim: 45 exec/s: 52 rss: 75Mb L: 22/27 MS: 1 PersAutoDict- DE: "\000\037"- 00:06:15.693 [2024-09-30 21:49:59.918260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffdf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.693 [2024-09-30 21:49:59.918286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.693 #53 NEW cov: 12449 ft: 15294 corp: 34/560b lim: 45 exec/s: 53 rss: 75Mb L: 16/27 MS: 1 ChangeBinInt- 00:06:15.693 [2024-09-30 21:49:59.978437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff930a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:15.693 [2024-09-30 21:49:59.978463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:15.693 #54 NEW cov: 12449 ft: 15366 corp: 35/575b lim: 45 exec/s: 27 rss: 75Mb L: 15/27 MS: 1 ShuffleBytes- 00:06:15.693 #54 DONE cov: 12449 ft: 15366 corp: 35/575b lim: 45 exec/s: 27 rss: 75Mb 00:06:15.693 ###### Recommended dictionary. 
###### 00:06:15.693 "\001\000\000\000\000\000\000\002" # Uses: 4 00:06:15.693 "\002\000" # Uses: 1 00:06:15.693 "\001\000\000\000\000\000\000\000" # Uses: 0 00:06:15.693 "\000\037" # Uses: 1 00:06:15.693 ###### End of recommended dictionary. ###### 00:06:15.693 Done 54 runs in 2 second(s) 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:06:15.953 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:15.954 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:15.954 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:15.954 21:50:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:06:15.954 [2024-09-30 21:50:00.174346] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:15.954 [2024-09-30 21:50:00.174418] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047118 ] 00:06:16.213 [2024-09-30 21:50:00.350507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.213 [2024-09-30 21:50:00.419219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.213 [2024-09-30 21:50:00.478845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.213 [2024-09-30 21:50:00.495228] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:06:16.213 INFO: Running with entropic power schedule (0xFF, 100). 00:06:16.213 INFO: Seed: 816244448 00:06:16.213 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:16.213 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:16.213 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:06:16.213 INFO: A corpus is not provided, starting from an empty corpus 00:06:16.213 #2 INITED exec/s: 0 rss: 66Mb 00:06:16.213 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:16.213 This may also happen if the target rejected all inputs we tried so far 00:06:16.213 [2024-09-30 21:50:00.542918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.213 [2024-09-30 21:50:00.542946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:06:16.732 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:16.732 #5 NEW cov: 12140 ft: 12129 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 3 ChangeBit-ShuffleBytes-InsertByte- 00:06:16.732 [2024-09-30 21:50:00.883856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:00.883889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 [2024-09-30 21:50:00.883939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000108 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:00.883952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.732 #6 NEW cov: 12253 ft: 12988 corp: 3/7b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CMP- DE: "\001\010"- 00:06:16.732 [2024-09-30 21:50:00.943863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:00.943889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 #7 NEW cov: 12259 ft: 13186 corp: 4/9b lim: 10 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:06:16.732 [2024-09-30 21:50:00.983912] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fed5 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:00.983937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 #8 NEW cov: 12344 ft: 13453 corp: 5/11b lim: 10 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBinInt- 00:06:16.732 [2024-09-30 21:50:01.024156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:01.024182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 [2024-09-30 21:50:01.024234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:01.024247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.732 #9 NEW cov: 12344 ft: 13517 corp: 6/16b lim: 10 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertByte- 00:06:16.732 [2024-09-30 21:50:01.084439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:01.084465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.732 [2024-09-30 21:50:01.084514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:01.084528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.732 [2024-09-30 21:50:01.084577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000801 cdw11:00000000 00:06:16.732 [2024-09-30 21:50:01.084590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.991 #10 NEW cov: 12344 ft: 13801 corp: 7/23b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 PersAutoDict- DE: "\001\010"- 00:06:16.991 [2024-09-30 21:50:01.144413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.144439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.991 #11 NEW cov: 12344 ft: 13956 corp: 8/26b lim: 10 exec/s: 0 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:06:16.991 [2024-09-30 21:50:01.204743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000700 cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.204769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.991 [2024-09-30 21:50:01.204820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.204833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:16.991 [2024-09-30 21:50:01.204883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000801 
cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.204896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:16.991 #12 NEW cov: 12344 ft: 13988 corp: 9/33b lim: 10 exec/s: 0 rss: 74Mb L: 7/7 MS: 1 ChangeBinInt- 00:06:16.991 [2024-09-30 21:50:01.264700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.264726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.991 #13 NEW cov: 12344 ft: 14014 corp: 10/36b lim: 10 exec/s: 0 rss: 74Mb L: 3/7 MS: 1 CrossOver- 00:06:16.991 [2024-09-30 21:50:01.324894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008c1 cdw11:00000000 00:06:16.991 [2024-09-30 21:50:01.324920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:16.991 #14 NEW cov: 12344 ft: 14091 corp: 11/38b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 ChangeByte- 00:06:17.250 [2024-09-30 21:50:01.364978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe23 cdw11:00000000 00:06:17.250 [2024-09-30 21:50:01.365007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 #15 NEW cov: 12344 ft: 14132 corp: 12/41b lim: 10 exec/s: 0 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:06:17.251 [2024-09-30 21:50:01.425162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.425188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:17.251 #16 NEW cov: 12367 ft: 14203 corp: 13/43b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 ShuffleBytes- 00:06:17.251 [2024-09-30 21:50:01.465404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe23 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.465430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.465482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d5c0 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.465497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.251 #17 NEW cov: 12367 ft: 14227 corp: 14/47b lim: 10 exec/s: 0 rss: 74Mb L: 4/7 MS: 1 InsertByte- 00:06:17.251 [2024-09-30 21:50:01.525505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000000d6 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.525531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 #18 NEW cov: 12367 ft: 14232 corp: 15/49b lim: 10 exec/s: 18 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:06:17.251 [2024-09-30 21:50:01.565886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 
cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.565912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.565965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.565978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.566029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000800 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.566043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.566093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.566106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:17.251 #19 NEW cov: 12367 ft: 14483 corp: 16/58b lim: 10 exec/s: 19 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:17.251 [2024-09-30 21:50:01.605890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000108 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.605915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.605968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.605982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.251 [2024-09-30 21:50:01.606031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000801 cdw11:00000000 00:06:17.251 [2024-09-30 21:50:01.606045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.510 #20 NEW cov: 12367 ft: 14545 corp: 17/65b lim: 10 exec/s: 20 rss: 74Mb L: 7/9 MS: 1 PersAutoDict- DE: "\001\010"- 00:06:17.510 [2024-09-30 21:50:01.666096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.666121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.510 [2024-09-30 21:50:01.666173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d6fc cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.666187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.510 [2024-09-30 21:50:01.666235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d6fc cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.666249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.510 #21 NEW cov: 12367 ft: 14553 corp: 18/71b lim: 10 exec/s: 21 rss: 74Mb L: 6/9 MS: 1 CopyPart- 00:06:17.510 [2024-09-30 
21:50:01.726011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000003e cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.726037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.510 #22 NEW cov: 12367 ft: 14569 corp: 19/73b lim: 10 exec/s: 22 rss: 74Mb L: 2/9 MS: 1 ChangeByte- 00:06:17.510 [2024-09-30 21:50:01.786182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000d5d5 cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.786207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.510 #24 NEW cov: 12367 ft: 14628 corp: 20/75b lim: 10 exec/s: 24 rss: 74Mb L: 2/9 MS: 2 EraseBytes-CopyPart- 00:06:17.510 [2024-09-30 21:50:01.826242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f5d5 cdw11:00000000 00:06:17.510 [2024-09-30 21:50:01.826268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.510 #25 NEW cov: 12367 ft: 14668 corp: 21/77b lim: 10 exec/s: 25 rss: 75Mb L: 2/9 MS: 1 ChangeBit- 00:06:17.769 [2024-09-30 21:50:01.886471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000801 cdw11:00000000 00:06:17.769 [2024-09-30 21:50:01.886496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.769 #29 NEW cov: 12367 ft: 14701 corp: 22/80b lim: 10 exec/s: 29 rss: 75Mb L: 3/9 MS: 4 EraseBytes-CopyPart-ShuffleBytes-PersAutoDict- DE: "\001\010"- 00:06:17.769 [2024-09-30 21:50:01.946759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe23 cdw11:00000000 00:06:17.769 [2024-09-30 21:50:01.946784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.769 [2024-09-30 21:50:01.946835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d4c0 cdw11:00000000 00:06:17.770 [2024-09-30 21:50:01.946848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.770 #30 NEW cov: 12367 ft: 14724 corp: 23/84b lim: 10 exec/s: 30 rss: 75Mb L: 4/9 MS: 1 ChangeBit- 00:06:17.770 [2024-09-30 21:50:02.006913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fe23 cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.006939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.770 [2024-09-30 21:50:02.006991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000008d5 cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.007008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.770 #31 NEW cov: 12367 ft: 14732 corp: 24/88b lim: 10 exec/s: 31 rss: 75Mb L: 4/9 MS: 1 CrossOver- 00:06:17.770 [2024-09-30 21:50:02.047133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:17.770 [2024-09-30 
21:50:02.047158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.770 [2024-09-30 21:50:02.047210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000096fc cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.047224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.770 [2024-09-30 21:50:02.047273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000d6fc cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.047287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:17.770 #32 NEW cov: 12367 ft: 14750 corp: 25/94b lim: 10 exec/s: 32 rss: 75Mb L: 6/9 MS: 1 ChangeBit- 00:06:17.770 [2024-09-30 21:50:02.107221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.107247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:17.770 [2024-09-30 21:50:02.107299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:17.770 [2024-09-30 21:50:02.107319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:17.770 #33 NEW cov: 12367 ft: 14760 corp: 26/99b lim: 10 exec/s: 33 rss: 75Mb L: 5/9 MS: 1 ChangeByte- 00:06:18.029 [2024-09-30 21:50:02.147662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.147687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.147737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009c9c cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.147751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.147801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000108 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.147814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.147866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.147879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.147931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.147945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:18.029 #34 NEW cov: 12367 ft: 14812 corp: 27/109b lim: 10 exec/s: 34 rss: 75Mb L: 10/10 MS: 1 CopyPart- 00:06:18.029 [2024-09-30 21:50:02.207621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00009c01 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.207647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.207697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000801 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.207714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.207762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000801 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.207775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:18.029 #35 NEW cov: 12367 ft: 14820 corp: 28/116b lim: 10 exec/s: 35 rss: 75Mb L: 7/10 MS: 1 CrossOver- 00:06:18.029 [2024-09-30 21:50:02.267673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.267699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.267750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000024fc cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.267763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.029 #36 NEW cov: 12367 ft: 14836 corp: 29/120b lim: 10 exec/s: 36 rss: 75Mb L: 4/10 MS: 1 InsertByte- 00:06:18.029 [2024-09-30 21:50:02.307738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.307763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.029 [2024-09-30 21:50:02.307816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000d608 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.307829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.029 #37 NEW cov: 12367 ft: 14869 corp: 30/124b lim: 10 exec/s: 37 rss: 75Mb L: 4/10 MS: 1 CopyPart- 00:06:18.029 [2024-09-30 21:50:02.347749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000808 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.347774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.029 #38 NEW cov: 12367 ft: 14883 corp: 31/127b lim: 10 exec/s: 38 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:06:18.029 [2024-09-30 21:50:02.387866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000018d6 cdw11:00000000 00:06:18.029 [2024-09-30 21:50:02.387891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.289 #39 NEW cov: 12367 ft: 14912 corp: 32/130b lim: 10 exec/s: 39 rss: 75Mb L: 3/10 MS: 1 ChangeBit- 00:06:18.289 [2024-09-30 21:50:02.427997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:18.289 [2024-09-30 21:50:02.428022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.289 #40 NEW cov: 12367 ft: 14928 corp: 33/133b lim: 10 exec/s: 40 rss: 75Mb L: 3/10 MS: 1 InsertByte- 00:06:18.289 [2024-09-30 21:50:02.468124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000fefe cdw11:00000000 00:06:18.289 [2024-09-30 21:50:02.468149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.289 #41 NEW cov: 12367 ft: 14946 corp: 34/136b lim: 10 exec/s: 41 rss: 75Mb L: 3/10 MS: 1 CopyPart- 00:06:18.289 [2024-09-30 21:50:02.508333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000008d6 cdw11:00000000 00:06:18.289 [2024-09-30 21:50:02.508359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:18.289 [2024-09-30 21:50:02.508410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000024bc cdw11:00000000 00:06:18.289 [2024-09-30 21:50:02.508427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:18.289 #42 NEW cov: 12367 ft: 14985 corp: 35/140b lim: 10 exec/s: 21 rss: 75Mb L: 4/10 MS: 1 ChangeBit- 00:06:18.289 #42 DONE cov: 12367 ft: 14985 corp: 35/140b lim: 10 exec/s: 21 rss: 75Mb 00:06:18.289 ###### Recommended dictionary. ###### 00:06:18.289 "\001\010" # Uses: 3 00:06:18.289 ###### End of recommended dictionary. 
###### 00:06:18.289 Done 42 runs in 2 second(s) 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:18.548 21:50:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:06:18.548 [2024-09-30 21:50:02.720448] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:18.548 [2024-09-30 21:50:02.720541] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047449 ] 00:06:18.548 [2024-09-30 21:50:02.901627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.808 [2024-09-30 21:50:02.968625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.808 [2024-09-30 21:50:03.027662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.808 [2024-09-30 21:50:03.044032] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:06:18.808 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:18.808 INFO: Seed: 3367246237 00:06:18.808 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:18.808 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:18.808 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:06:18.808 INFO: A corpus is not provided, starting from an empty corpus 00:06:18.808 #2 INITED exec/s: 0 rss: 65Mb 00:06:18.808 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:18.808 This may also happen if the target rejected all inputs we tried so far 00:06:18.808 [2024-09-30 21:50:03.089313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:18.808 [2024-09-30 21:50:03.089342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.067 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:06:19.067 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:19.067 #4 NEW cov: 12140 ft: 12128 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 CrossOver-CrossOver- 00:06:19.067 [2024-09-30 21:50:03.410152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.067 [2024-09-30 21:50:03.410184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 #5 NEW cov: 12253 ft: 12853 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:06:19.327 [2024-09-30 21:50:03.470322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.470350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 [2024-09-30 21:50:03.470403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.470417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.327 #6 NEW cov: 12259 ft: 13261 corp: 4/9b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CopyPart- 00:06:19.327 [2024-09-30 21:50:03.530595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.530620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 [2024-09-30 21:50:03.530673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000da0a cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.530687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.327 [2024-09-30 21:50:03.530735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c80a cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.530749] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:19.327 #7 NEW cov: 12344 ft: 13658 corp: 5/16b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:06:19.327 [2024-09-30 21:50:03.590537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.590562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 #8 NEW cov: 12344 ft: 13847 corp: 6/19b lim: 10 exec/s: 0 rss: 73Mb L: 3/7 MS: 1 CopyPart- 00:06:19.327 [2024-09-30 21:50:03.630755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.630781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 [2024-09-30 21:50:03.630833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005cc8 cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.630847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.327 #9 NEW cov: 12344 ft: 13901 corp: 7/23b lim: 10 exec/s: 0 rss: 73Mb L: 4/7 MS: 1 ChangeByte- 00:06:19.327 [2024-09-30 21:50:03.670745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00006a0a cdw11:00000000 00:06:19.327 [2024-09-30 21:50:03.670770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.327 #12 NEW cov: 12344 ft: 14051 corp: 8/25b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 3 ChangeBit-ChangeBit-CrossOver- 00:06:19.586 [2024-09-30 21:50:03.710889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:19.586 [2024-09-30 21:50:03.710915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.586 #13 NEW cov: 12344 ft: 14078 corp: 9/28b lim: 10 exec/s: 0 rss: 73Mb L: 3/7 MS: 1 ChangeBit- 00:06:19.586 [2024-09-30 21:50:03.771013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.586 [2024-09-30 21:50:03.771038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.586 #14 NEW cov: 12344 ft: 14132 corp: 10/30b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 EraseBytes- 00:06:19.586 [2024-09-30 21:50:03.831321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:19.586 [2024-09-30 21:50:03.831347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.586 [2024-09-30 21:50:03.831399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:06:19.586 [2024-09-30 21:50:03.831413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.586 #15 NEW cov: 12344 ft: 14154 corp: 11/35b lim: 10 exec/s: 0 rss: 73Mb L: 5/7 MS: 1 CMP- DE: "\377\377"- 00:06:19.586 [2024-09-30 
21:50:03.891488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:19.587 [2024-09-30 21:50:03.891514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.587 [2024-09-30 21:50:03.891566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:19.587 [2024-09-30 21:50:03.891580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.587 #16 NEW cov: 12344 ft: 14178 corp: 12/40b lim: 10 exec/s: 0 rss: 74Mb L: 5/7 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:19.587 [2024-09-30 21:50:03.931467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:19.587 [2024-09-30 21:50:03.931493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.587 #17 NEW cov: 12344 ft: 14238 corp: 13/43b lim: 10 exec/s: 0 rss: 74Mb L: 3/7 MS: 1 ChangeByte- 00:06:19.846 [2024-09-30 21:50:03.971608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:03.971633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:19.846 #18 NEW cov: 12367 ft: 14269 corp: 14/45b lim: 10 exec/s: 0 rss: 74Mb L: 2/7 MS: 1 EraseBytes- 00:06:19.846 [2024-09-30 21:50:04.011815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.011840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 [2024-09-30 21:50:04.011892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005cc8 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.011906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.846 #19 NEW cov: 12367 ft: 14281 corp: 15/49b lim: 10 exec/s: 0 rss: 74Mb L: 4/7 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:19.846 [2024-09-30 21:50:04.051993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.052018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 [2024-09-30 21:50:04.052071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.052084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.846 #22 NEW cov: 12367 ft: 14292 corp: 16/54b lim: 10 exec/s: 0 rss: 74Mb L: 5/7 MS: 3 EraseBytes-CrossOver-InsertRepeatedBytes- 00:06:19.846 [2024-09-30 21:50:04.091916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001313 cdw11:00000000 00:06:19.846 
[2024-09-30 21:50:04.091941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 #25 NEW cov: 12367 ft: 14308 corp: 17/56b lim: 10 exec/s: 25 rss: 74Mb L: 2/7 MS: 3 ShuffleBytes-ChangeBinInt-CopyPart- 00:06:19.846 [2024-09-30 21:50:04.132156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ac8 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.132182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 [2024-09-30 21:50:04.132233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005cc8 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.132247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:19.846 #26 NEW cov: 12367 ft: 14312 corp: 18/61b lim: 10 exec/s: 26 rss: 74Mb L: 5/7 MS: 1 InsertByte- 00:06:19.846 [2024-09-30 21:50:04.172166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.172193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:19.846 #30 NEW cov: 12367 ft: 14325 corp: 19/63b lim: 10 exec/s: 30 rss: 74Mb L: 2/7 MS: 4 EraseBytes-CrossOver-CopyPart-CrossOver- 00:06:19.846 [2024-09-30 21:50:04.212351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af9 cdw11:00000000 00:06:19.846 [2024-09-30 21:50:04.212378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.105 #31 NEW cov: 12367 ft: 14347 corp: 20/65b lim: 10 exec/s: 31 rss: 74Mb L: 2/7 MS: 1 EraseBytes- 00:06:20.105 [2024-09-30 21:50:04.272467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:00000000 00:06:20.105 [2024-09-30 21:50:04.272494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.105 #32 NEW cov: 12367 ft: 14357 corp: 21/67b lim: 10 exec/s: 32 rss: 74Mb L: 2/7 MS: 1 CopyPart- 00:06:20.105 [2024-09-30 21:50:04.332754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:20.105 [2024-09-30 21:50:04.332780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.105 [2024-09-30 21:50:04.332832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a1a cdw11:00000000 00:06:20.105 [2024-09-30 21:50:04.332846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.105 #33 NEW cov: 12367 ft: 14362 corp: 22/71b lim: 10 exec/s: 33 rss: 74Mb L: 4/7 MS: 1 CopyPart- 00:06:20.105 [2024-09-30 21:50:04.372765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000da44 cdw11:00000000 00:06:20.105 [2024-09-30 21:50:04.372790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.105 #34 NEW cov: 12367 ft: 
14406 corp: 23/73b lim: 10 exec/s: 34 rss: 74Mb L: 2/7 MS: 1 ChangeByte- 00:06:20.105 [2024-09-30 21:50:04.433041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:20.105 [2024-09-30 21:50:04.433068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.105 [2024-09-30 21:50:04.433122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c80a cdw11:00000000 00:06:20.106 [2024-09-30 21:50:04.433136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.106 #35 NEW cov: 12367 ft: 14500 corp: 24/78b lim: 10 exec/s: 35 rss: 74Mb L: 5/7 MS: 1 CrossOver- 00:06:20.365 [2024-09-30 21:50:04.493205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a07 cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.493231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.365 [2024-09-30 21:50:04.493284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000c8 cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.493298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.365 #36 NEW cov: 12367 ft: 14525 corp: 25/82b lim: 10 exec/s: 36 rss: 74Mb L: 4/7 MS: 1 CMP- DE: "\007\000"- 00:06:20.365 [2024-09-30 21:50:04.553220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.553245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.365 #37 NEW cov: 12367 ft: 14535 corp: 26/85b lim: 10 exec/s: 37 rss: 74Mb L: 3/7 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:20.365 [2024-09-30 21:50:04.593372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.593398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.365 #38 NEW cov: 12367 ft: 14558 corp: 27/88b lim: 10 exec/s: 38 rss: 74Mb L: 3/7 MS: 1 CrossOver- 00:06:20.365 [2024-09-30 21:50:04.633481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000af9 cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.633507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.365 #40 NEW cov: 12367 ft: 14568 corp: 28/91b lim: 10 exec/s: 40 rss: 74Mb L: 3/7 MS: 2 EraseBytes-CrossOver- 00:06:20.365 [2024-09-30 21:50:04.693895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.693921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.365 [2024-09-30 21:50:04.693973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000ff cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.693986] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.365 [2024-09-30 21:50:04.694035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff00 cdw11:00000000 00:06:20.365 [2024-09-30 21:50:04.694048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.625 #41 NEW cov: 12367 ft: 14588 corp: 29/98b lim: 10 exec/s: 41 rss: 74Mb L: 7/7 MS: 1 PersAutoDict- DE: "\377\377"- 00:06:20.625 [2024-09-30 21:50:04.754189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002500 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.754214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.754264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.754277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.754328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.754341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.754391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000000ff cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.754404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.625 #42 NEW cov: 12367 ft: 14803 corp: 30/106b lim: 10 exec/s: 42 rss: 74Mb L: 8/8 MS: 1 InsertByte- 00:06:20.625 [2024-09-30 21:50:04.814343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.814369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.814421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c80a cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.814434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.814485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c818 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.814498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.814545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00001818 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.814557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:20.625 #43 NEW cov: 12367 ft: 14812 corp: 31/115b lim: 10 exec/s: 43 rss: 75Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:06:20.625 [2024-09-30 21:50:04.874258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.874284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.625 [2024-09-30 21:50:04.874339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.874353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.625 #44 NEW cov: 12367 ft: 14817 corp: 32/120b lim: 10 exec/s: 44 rss: 75Mb L: 5/9 MS: 1 CrossOver- 00:06:20.625 [2024-09-30 21:50:04.914289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f6f4 cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.914318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.625 #45 NEW cov: 12367 ft: 14888 corp: 33/123b lim: 10 exec/s: 45 rss: 75Mb L: 3/9 MS: 1 ChangeBinInt- 00:06:20.625 [2024-09-30 21:50:04.974437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2b cdw11:00000000 00:06:20.625 [2024-09-30 21:50:04.974462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.885 #46 NEW cov: 12367 ft: 14900 corp: 34/125b lim: 10 exec/s: 46 rss: 75Mb L: 2/9 MS: 1 InsertByte- 00:06:20.885 [2024-09-30 21:50:05.014672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:06:20.885 [2024-09-30 21:50:05.014697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.885 [2024-09-30 21:50:05.014748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000af9 cdw11:00000000 00:06:20.885 [2024-09-30 21:50:05.014762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:20.885 #47 NEW cov: 12367 ft: 14905 corp: 35/129b lim: 10 exec/s: 47 rss: 75Mb L: 4/9 MS: 1 CrossOver- 00:06:20.885 [2024-09-30 21:50:05.074712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000dada cdw11:00000000 00:06:20.885 [2024-09-30 21:50:05.074737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:20.885 #48 NEW cov: 12367 ft: 14911 corp: 36/131b lim: 10 exec/s: 24 rss: 75Mb L: 2/9 MS: 1 CopyPart- 00:06:20.885 #48 DONE cov: 12367 ft: 14911 corp: 36/131b lim: 10 exec/s: 24 rss: 75Mb 00:06:20.885 ###### Recommended dictionary. ###### 00:06:20.885 "\377\377" # Uses: 4 00:06:20.885 "\007\000" # Uses: 0 00:06:20.885 ###### End of recommended dictionary. 
###### 00:06:20.885 Done 48 runs in 2 second(s) 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:20.885 21:50:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:06:21.145 [2024-09-30 21:50:05.265603] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:21.145 [2024-09-30 21:50:05.265687] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047937 ] 00:06:21.145 [2024-09-30 21:50:05.442305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.145 [2024-09-30 21:50:05.507316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.404 [2024-09-30 21:50:05.566080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.404 [2024-09-30 21:50:05.582448] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:06:21.404 INFO: Running with entropic power schedule (0xFF, 100). 
00:06:21.404 INFO: Seed: 1610273377 00:06:21.404 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:21.404 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:21.404 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:06:21.404 INFO: A corpus is not provided, starting from an empty corpus 00:06:21.404 [2024-09-30 21:50:05.627868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.404 [2024-09-30 21:50:05.627897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.404 #2 INITED cov: 12160 ft: 12134 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:21.404 [2024-09-30 21:50:05.668053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.404 [2024-09-30 21:50:05.668080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.404 [2024-09-30 21:50:05.668140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.404 [2024-09-30 21:50:05.668155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.404 #3 NEW cov: 12281 ft: 13439 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:06:21.404 [2024-09-30 21:50:05.728210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.404 [2024-09-30 21:50:05.728237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.404 [2024-09-30 21:50:05.728298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.404 [2024-09-30 21:50:05.728319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.404 #4 NEW cov: 12287 ft: 13755 corp: 3/5b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:06:21.663 [2024-09-30 21:50:05.788377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.663 [2024-09-30 21:50:05.788403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.663 [2024-09-30 21:50:05.788463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.663 [2024-09-30 21:50:05.788477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.663 #5 NEW cov: 12372 ft: 14079 corp: 4/7b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:06:21.663 [2024-09-30 21:50:05.848361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT 
(15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.663 [2024-09-30 21:50:05.848386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.663 #6 NEW cov: 12372 ft: 14252 corp: 5/8b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 EraseBytes- 00:06:21.663 [2024-09-30 21:50:05.908741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.663 [2024-09-30 21:50:05.908767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.663 [2024-09-30 21:50:05.908826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.663 [2024-09-30 21:50:05.908841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.663 #7 NEW cov: 12372 ft: 14303 corp: 6/10b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:06:21.663 [2024-09-30 21:50:05.949328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:05.949354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.664 [2024-09-30 21:50:05.949414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:05.949427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.664 [2024-09-30 21:50:05.949487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:05.949501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.664 [2024-09-30 21:50:05.949557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:05.949571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:21.664 [2024-09-30 21:50:05.949631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:05.949644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:21.664 #8 NEW cov: 12372 ft: 14683 corp: 7/15b lim: 5 exec/s: 0 rss: 72Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\036"- 00:06:21.664 [2024-09-30 21:50:06.009033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:06.009060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.664 [2024-09-30 21:50:06.009122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.664 [2024-09-30 21:50:06.009136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.664 #9 NEW cov: 12372 ft: 14781 corp: 8/17b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 CopyPart- 00:06:21.923 [2024-09-30 21:50:06.048927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.048952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.923 #10 NEW cov: 12372 ft: 14945 corp: 9/18b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:06:21.923 [2024-09-30 21:50:06.089219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.089249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.923 [2024-09-30 21:50:06.089313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.089328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.923 #11 NEW cov: 12372 ft: 14984 corp: 10/20b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:21.923 [2024-09-30 21:50:06.129526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.129551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.923 [2024-09-30 21:50:06.129611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.129625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.923 [2024-09-30 21:50:06.129692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.129712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:21.923 #12 NEW cov: 12372 ft: 15159 corp: 11/23b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 InsertByte- 00:06:21.923 [2024-09-30 21:50:06.189547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.189574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.923 [2024-09-30 21:50:06.189634] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.189648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:21.923 #13 NEW cov: 12372 ft: 15241 corp: 12/25b lim: 5 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeByte- 00:06:21.923 [2024-09-30 21:50:06.249515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:21.923 [2024-09-30 21:50:06.249541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:21.923 #14 NEW cov: 12372 ft: 15270 corp: 13/26b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBit- 00:06:22.183 [2024-09-30 21:50:06.309787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.309813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.183 [2024-09-30 21:50:06.309874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.309887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.183 #15 NEW cov: 12372 ft: 15300 corp: 14/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:06:22.183 [2024-09-30 21:50:06.349794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.349822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.183 #16 NEW cov: 12372 ft: 15339 corp: 15/29b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 ChangeBinInt- 00:06:22.183 [2024-09-30 21:50:06.389896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.389921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.183 #17 NEW cov: 12372 ft: 15349 corp: 16/30b lim: 5 exec/s: 0 rss: 73Mb L: 1/5 MS: 1 CrossOver- 00:06:22.183 [2024-09-30 21:50:06.450241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.450266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.183 [2024-09-30 21:50:06.450328] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.450342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.183 #18 NEW cov: 12372 ft: 
15413 corp: 17/32b lim: 5 exec/s: 0 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:06:22.183 [2024-09-30 21:50:06.490730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.490756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.183 [2024-09-30 21:50:06.490816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.490831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.183 [2024-09-30 21:50:06.490891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.490905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.183 [2024-09-30 21:50:06.490963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.183 [2024-09-30 21:50:06.490977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.442 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:22.442 #19 NEW cov: 12395 ft: 15459 corp: 18/36b lim: 5 exec/s: 19 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:06:22.702 [2024-09-30 21:50:06.811321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.811354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.811416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.811431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.702 #20 NEW cov: 12395 ft: 15478 corp: 19/38b lim: 5 exec/s: 20 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:22.702 [2024-09-30 21:50:06.871648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.871678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.871739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.871754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.871813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.871827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.871884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.871898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.702 #21 NEW cov: 12395 ft: 15494 corp: 20/42b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:06:22.702 [2024-09-30 21:50:06.931485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.931511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.931571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.931585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.702 #22 NEW cov: 12395 ft: 15542 corp: 21/44b lim: 5 exec/s: 22 rss: 74Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:22.702 [2024-09-30 21:50:06.992146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.992174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.992231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.992245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.992303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.992323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.992379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.992392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:06.992452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:06.992466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:22.702 #23 NEW cov: 12395 ft: 15567 corp: 
22/49b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 CMP- DE: "\000\000"- 00:06:22.702 [2024-09-30 21:50:07.032062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:07.032092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:07.032152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:07.032166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:07.032226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:07.032240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.702 [2024-09-30 21:50:07.032298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.702 [2024-09-30 21:50:07.032318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.962 #24 NEW cov: 12395 ft: 15582 corp: 23/53b lim: 5 exec/s: 24 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:06:22.962 [2024-09-30 21:50:07.091911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.091936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.962 [2024-09-30 21:50:07.092013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.092027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.962 #25 NEW cov: 12395 ft: 15603 corp: 24/55b lim: 5 exec/s: 25 rss: 74Mb L: 2/5 MS: 1 InsertByte- 00:06:22.962 [2024-09-30 21:50:07.132034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.132059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.962 [2024-09-30 21:50:07.132119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.132133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.962 #26 NEW cov: 12395 ft: 15609 corp: 25/57b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:06:22.962 [2024-09-30 21:50:07.171963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.171988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.962 #27 NEW cov: 12395 ft: 15648 corp: 26/58b lim: 5 exec/s: 27 rss: 74Mb L: 1/5 MS: 1 ChangeByte- 00:06:22.962 [2024-09-30 21:50:07.212581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.212607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.962 [2024-09-30 21:50:07.212667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.212684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:22.962 [2024-09-30 21:50:07.212744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.212758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:22.962 [2024-09-30 21:50:07.212816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.212830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:22.962 #28 NEW cov: 12395 ft: 15663 corp: 27/62b lim: 5 exec/s: 28 rss: 75Mb L: 4/5 MS: 1 ShuffleBytes- 00:06:22.962 [2024-09-30 21:50:07.272297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:22.962 [2024-09-30 21:50:07.272336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:22.962 #29 NEW cov: 12395 ft: 15664 corp: 28/63b lim: 5 exec/s: 29 rss: 75Mb L: 1/5 MS: 1 ChangeBit- 00:06:23.245 [2024-09-30 21:50:07.332629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.332656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.332717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.332731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.245 #30 NEW cov: 12395 ft: 15672 corp: 29/65b lim: 5 exec/s: 30 rss: 75Mb L: 2/5 MS: 1 ChangeBit- 00:06:23.245 [2024-09-30 21:50:07.372533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 
[2024-09-30 21:50:07.372559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 #31 NEW cov: 12395 ft: 15712 corp: 30/66b lim: 5 exec/s: 31 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:06:23.245 [2024-09-30 21:50:07.412817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.412843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.412902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.412916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.245 #32 NEW cov: 12395 ft: 15718 corp: 31/68b lim: 5 exec/s: 32 rss: 75Mb L: 2/5 MS: 1 CopyPart- 00:06:23.245 [2024-09-30 21:50:07.452980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.453006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.453063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.453077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.245 #33 NEW cov: 12395 ft: 15730 corp: 32/70b lim: 5 exec/s: 33 rss: 75Mb L: 2/5 MS: 1 ShuffleBytes- 00:06:23.245 [2024-09-30 21:50:07.513664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.513691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.513750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.513764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.513823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.513837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.513895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.513909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.513967] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.513981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.245 #34 NEW cov: 12395 ft: 15734 corp: 33/75b lim: 5 exec/s: 34 rss: 75Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:23.245 [2024-09-30 21:50:07.553732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.553758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.553818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.553832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.553891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.553905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.553964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.553978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:23.245 [2024-09-30 21:50:07.554036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.245 [2024-09-30 21:50:07.554049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:23.245 #35 NEW cov: 12395 ft: 15752 corp: 34/80b lim: 5 exec/s: 35 rss: 75Mb L: 5/5 MS: 1 InsertByte- 00:06:23.245 [2024-09-30 21:50:07.613427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.504 [2024-09-30 21:50:07.613452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:23.504 [2024-09-30 21:50:07.613517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:23.504 [2024-09-30 21:50:07.613532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:23.504 #36 NEW cov: 12395 ft: 15761 corp: 35/82b lim: 5 exec/s: 18 rss: 75Mb L: 2/5 MS: 1 InsertByte- 00:06:23.504 #36 DONE cov: 12395 ft: 15761 corp: 35/82b lim: 5 exec/s: 18 rss: 75Mb 00:06:23.504 ###### Recommended dictionary. 
###### 00:06:23.504 "\377\377\377\036" # Uses: 0 00:06:23.504 "\000\000" # Uses: 0 00:06:23.504 ###### End of recommended dictionary. ###### 00:06:23.504 Done 36 runs in 2 second(s) 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:23.504 21:50:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:06:23.504 [2024-09-30 21:50:07.807179] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
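For reference, the fuzzer-9 launch above can be reproduced by hand from the flags captured in the log. A minimal sketch, assuming SPDK_DIR points at the spdk checkout and that the two run.sh echo lines populate the LeakSanitizer suppression file named in LSAN_OPTIONS, is:

  # Hypothetical manual re-run of fuzzer 9 with the logged LSan suppressions.
  suppress_file=/var/tmp/suppress_nvmf_fuzz
  printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"
  # Flags mirror the logged invocation: core mask, memory size, output dir,
  # target trid, per-run config, time limit, corpus dir, and fuzzer index.
  "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$SPDK_DIR/../output/llvm/" \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' \
      -c /tmp/fuzz_json_9.conf -t 1 \
      -D "$SPDK_DIR/../corpus/llvm_nvmf_9" -Z 9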
00:06:23.504 [2024-09-30 21:50:07.807253] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048472 ] 00:06:23.764 [2024-09-30 21:50:07.987005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.764 [2024-09-30 21:50:08.052216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.764 [2024-09-30 21:50:08.111003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.764 [2024-09-30 21:50:08.127417] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:06:24.023 INFO: Running with entropic power schedule (0xFF, 100). 00:06:24.023 INFO: Seed: 4153270688 00:06:24.023 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:24.023 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:24.023 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:06:24.023 INFO: A corpus is not provided, starting from an empty corpus 00:06:24.023 [2024-09-30 21:50:08.175981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.023 [2024-09-30 21:50:08.176012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.023 #2 INITED cov: 12131 ft: 12149 corp: 1/1b exec/s: 0 rss: 71Mb 00:06:24.023 [2024-09-30 21:50:08.216011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.023 [2024-09-30 21:50:08.216037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.282 NEW_FUNC[1/3]: 0x1bf0e88 in event_queue_run_batch /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:589 00:06:24.282 NEW_FUNC[2/3]: 0x1bf24d8 in reactor_post_process_lw_thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:917 00:06:24.282 #3 NEW cov: 12281 ft: 12848 corp: 2/2b lim: 5 exec/s: 0 rss: 73Mb L: 1/1 MS: 1 CopyPart- 00:06:24.283 [2024-09-30 21:50:08.527386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.527417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.527471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.527485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.527536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.527549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.527602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.527615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.527666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.527680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.283 #4 NEW cov: 12287 ft: 13794 corp: 3/7b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:06:24.283 [2024-09-30 21:50:08.567392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.567418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.567473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.567486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.567538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.567554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.567606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.567619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.567672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.567685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.283 #5 NEW cov: 12372 ft: 14054 corp: 4/12b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:24.283 [2024-09-30 21:50:08.627558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.627584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.627637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.627650] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.627705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.627719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.627770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.627784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.283 [2024-09-30 21:50:08.627837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.283 [2024-09-30 21:50:08.627850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.542 #6 NEW cov: 12372 ft: 14095 corp: 5/17b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:06:24.542 [2024-09-30 21:50:08.687746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.542 [2024-09-30 21:50:08.687770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.542 [2024-09-30 21:50:08.687826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.687839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.687890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.687903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.687954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.687967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.688022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.688035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.543 #7 NEW cov: 12372 ft: 14144 corp: 6/22b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 CopyPart- 00:06:24.543 [2024-09-30 21:50:08.727492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:24.543 [2024-09-30 21:50:08.727517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.727571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.727585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.727636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.727650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.543 #8 NEW cov: 12372 ft: 14418 corp: 7/25b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:06:24.543 [2024-09-30 21:50:08.767947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.767973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.768028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.768042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.768095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.768108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.768160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.768174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.768228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.768242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.543 #9 NEW cov: 12372 ft: 14471 corp: 8/30b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeBit- 00:06:24.543 [2024-09-30 21:50:08.807746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.807772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.807822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.807839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.807905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.807921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.543 #10 NEW cov: 12372 ft: 14650 corp: 9/33b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 EraseBytes- 00:06:24.543 [2024-09-30 21:50:08.868198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.868223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.868278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.868293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.868350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.868364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.868416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.868429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.543 [2024-09-30 21:50:08.868480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.543 [2024-09-30 21:50:08.868494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.543 #11 NEW cov: 12372 ft: 14742 corp: 10/38b lim: 5 exec/s: 0 rss: 73Mb L: 5/5 MS: 1 ChangeByte- 00:06:24.802 [2024-09-30 21:50:08.928074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.802 [2024-09-30 21:50:08.928099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.802 [2024-09-30 21:50:08.928153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.802 [2024-09-30 21:50:08.928167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.802 [2024-09-30 21:50:08.928218] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.802 [2024-09-30 21:50:08.928232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.802 #12 NEW cov: 12372 ft: 14783 corp: 11/41b lim: 5 exec/s: 0 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:06:24.802 [2024-09-30 21:50:08.988555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.802 [2024-09-30 21:50:08.988580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.802 [2024-09-30 21:50:08.988635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.802 [2024-09-30 21:50:08.988657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.802 [2024-09-30 21:50:08.988708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:08.988721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:08.988771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:08.988784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:08.988836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:08.988849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.803 #13 NEW cov: 12372 ft: 14814 corp: 12/46b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:06:24.803 [2024-09-30 21:50:09.048552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.048578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.048629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.048643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.048695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.048709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:06:24.803 [2024-09-30 21:50:09.048760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.048774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.803 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:24.803 #14 NEW cov: 12395 ft: 14936 corp: 13/50b lim: 5 exec/s: 0 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:06:24.803 [2024-09-30 21:50:09.108866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.108891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.108941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.108956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.109024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.109044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.109117] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.109142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.109212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.109234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:24.803 #15 NEW cov: 12395 ft: 15022 corp: 14/55b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:24.803 [2024-09-30 21:50:09.148975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.149000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.149053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.149067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.149120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.149133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.149185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.149199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:24.803 [2024-09-30 21:50:09.149252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:24.803 [2024-09-30 21:50:09.149265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.062 #16 NEW cov: 12395 ft: 15032 corp: 15/60b lim: 5 exec/s: 16 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:25.062 [2024-09-30 21:50:09.208852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.208877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.208930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.208944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.208995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.209008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.062 #17 NEW cov: 12395 ft: 15099 corp: 16/63b lim: 5 exec/s: 17 rss: 74Mb L: 3/5 MS: 1 ChangeBit- 00:06:25.062 [2024-09-30 21:50:09.249196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.249221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.249279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.249292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.249350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.249363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.249416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.249429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.249482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.249495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.062 #18 NEW cov: 12395 ft: 15151 corp: 17/68b lim: 5 exec/s: 18 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:06:25.062 [2024-09-30 21:50:09.289318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.289343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.289397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.289412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.289466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.289479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.289532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.289545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.289596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.289610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.062 #19 NEW cov: 12395 ft: 15188 corp: 18/73b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:25.062 [2024-09-30 21:50:09.349550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.349575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.349629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.349643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.062 [2024-09-30 21:50:09.349699] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.062 [2024-09-30 21:50:09.349712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.349762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.349776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.349829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.349842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.063 #20 NEW cov: 12395 ft: 15208 corp: 19/78b lim: 5 exec/s: 20 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:25.063 [2024-09-30 21:50:09.389455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.389482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.389533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.389547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.389614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.389632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.389704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.389724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.063 #21 NEW cov: 12395 ft: 15237 corp: 20/82b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:06:25.063 [2024-09-30 21:50:09.429601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.429627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.429682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.429697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.429751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.429765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.063 [2024-09-30 21:50:09.429817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.063 [2024-09-30 21:50:09.429831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.322 #22 NEW cov: 12395 ft: 15263 corp: 21/86b lim: 5 exec/s: 22 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:06:25.322 [2024-09-30 21:50:09.489924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.489949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.490005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.490018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.490068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.490082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.490135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.490149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.490199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.490213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.322 #23 NEW cov: 12395 ft: 15274 corp: 22/91b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:06:25.322 [2024-09-30 21:50:09.529891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.529915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.529970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.529984] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.530035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.530048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.530099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.530113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.322 #24 NEW cov: 12395 ft: 15280 corp: 23/95b lim: 5 exec/s: 24 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:06:25.322 [2024-09-30 21:50:09.569834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.569860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.569914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.569928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.569984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.569997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.322 #25 NEW cov: 12395 ft: 15306 corp: 24/98b lim: 5 exec/s: 25 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:06:25.322 [2024-09-30 21:50:09.609801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.609826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.609879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.609893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.322 #26 NEW cov: 12395 ft: 15479 corp: 25/100b lim: 5 exec/s: 26 rss: 74Mb L: 2/5 MS: 1 EraseBytes- 00:06:25.322 [2024-09-30 21:50:09.670461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.670487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.670538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.670552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.670604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.670618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.670672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.670685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.322 [2024-09-30 21:50:09.670736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.322 [2024-09-30 21:50:09.670749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.582 #27 NEW cov: 12395 ft: 15491 corp: 26/105b lim: 5 exec/s: 27 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:06:25.582 [2024-09-30 21:50:09.710255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.710280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.710337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.710351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.710403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.710415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.582 #28 NEW cov: 12395 ft: 15513 corp: 27/108b lim: 5 exec/s: 28 rss: 74Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:25.582 [2024-09-30 21:50:09.770445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.770468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.770538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.770552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.582 
[2024-09-30 21:50:09.770605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.770617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.582 #29 NEW cov: 12395 ft: 15523 corp: 28/111b lim: 5 exec/s: 29 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:06:25.582 [2024-09-30 21:50:09.810855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.810879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.810934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.810947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.810999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.811012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.811062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.811075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.811126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.811139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.582 #30 NEW cov: 12395 ft: 15544 corp: 29/116b lim: 5 exec/s: 30 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:06:25.582 [2024-09-30 21:50:09.850774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.850799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.850854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.850868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.850923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.850940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.850990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.851003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.582 #31 NEW cov: 12395 ft: 15554 corp: 30/120b lim: 5 exec/s: 31 rss: 74Mb L: 4/5 MS: 1 CopyPart- 00:06:25.582 [2024-09-30 21:50:09.911094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.911119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.911174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.911187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.911239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.911252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.582 [2024-09-30 21:50:09.911305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.582 [2024-09-30 21:50:09.911322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:25.583 [2024-09-30 21:50:09.911376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.583 [2024-09-30 21:50:09.911389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:25.842 #32 NEW cov: 12395 ft: 15561 corp: 31/125b lim: 5 exec/s: 32 rss: 74Mb L: 5/5 MS: 1 InsertByte- 00:06:25.842 [2024-09-30 21:50:09.971008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:09.971032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.842 [2024-09-30 21:50:09.971082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:09.971095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.842 [2024-09-30 21:50:09.971148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:09.971161] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.842 #33 NEW cov: 12395 ft: 15573 corp: 32/128b lim: 5 exec/s: 33 rss: 75Mb L: 3/5 MS: 1 ShuffleBytes- 00:06:25.842 [2024-09-30 21:50:10.011112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:10.011137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.842 [2024-09-30 21:50:10.011193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:10.011210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.842 [2024-09-30 21:50:10.011265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:10.011279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.842 #34 NEW cov: 12395 ft: 15592 corp: 33/131b lim: 5 exec/s: 34 rss: 75Mb L: 3/5 MS: 1 ChangeBit- 00:06:25.842 [2024-09-30 21:50:10.081020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.842 [2024-09-30 21:50:10.081053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.843 #35 NEW cov: 12395 ft: 15605 corp: 34/132b lim: 5 exec/s: 35 rss: 75Mb L: 1/5 MS: 1 ChangeByte- 00:06:25.843 [2024-09-30 21:50:10.121396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.843 [2024-09-30 21:50:10.121427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.843 [2024-09-30 21:50:10.121485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.843 [2024-09-30 21:50:10.121500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.843 [2024-09-30 21:50:10.121551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.843 [2024-09-30 21:50:10.121565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:25.843 #36 NEW cov: 12395 ft: 15609 corp: 35/135b lim: 5 exec/s: 36 rss: 75Mb L: 3/5 MS: 1 CrossOver- 00:06:25.843 [2024-09-30 21:50:10.161369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.843 [2024-09-30 21:50:10.161395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:25.843 [2024-09-30 21:50:10.161451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:25.843 [2024-09-30 21:50:10.161465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:25.843 #37 NEW cov: 12395 ft: 15617 corp: 36/137b lim: 5 exec/s: 18 rss: 75Mb L: 2/5 MS: 1 EraseBytes- 00:06:25.843 #37 DONE cov: 12395 ft: 15617 corp: 36/137b lim: 5 exec/s: 18 rss: 75Mb 00:06:25.843 Done 37 runs in 2 second(s) 00:06:26.101 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:06:26.101 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:26.102 21:50:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:06:26.102 [2024-09-30 21:50:10.355245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:26.102 [2024-09-30 21:50:10.355332] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048767 ] 00:06:26.361 [2024-09-30 21:50:10.544602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.361 [2024-09-30 21:50:10.612078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.361 [2024-09-30 21:50:10.671562] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.361 [2024-09-30 21:50:10.687957] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:06:26.361 INFO: Running with entropic power schedule (0xFF, 100). 00:06:26.361 INFO: Seed: 2419311009 00:06:26.620 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:26.620 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:26.620 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:06:26.620 INFO: A corpus is not provided, starting from an empty corpus 00:06:26.620 #2 INITED exec/s: 0 rss: 65Mb 00:06:26.620 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:26.620 This may also happen if the target rejected all inputs we tried so far 00:06:26.620 [2024-09-30 21:50:10.758172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0e7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.620 [2024-09-30 21:50:10.758210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.620 [2024-09-30 21:50:10.758348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.620 [2024-09-30 21:50:10.758365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.620 [2024-09-30 21:50:10.758495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.620 [2024-09-30 21:50:10.758513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.879 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:06:26.879 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:26.879 #5 NEW cov: 12191 ft: 12188 corp: 2/31b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:06:26.879 [2024-09-30 21:50:11.099095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.099133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.099260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.099274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.099404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.099422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.879 #6 NEW cov: 12304 ft: 12763 corp: 3/61b lim: 40 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:06:26.879 [2024-09-30 21:50:11.139108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.139137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.139269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.139288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.139423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.139442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.879 #10 NEW cov: 12310 ft: 13094 corp: 4/85b lim: 40 exec/s: 0 rss: 73Mb L: 24/30 MS: 4 InsertRepeatedBytes-EraseBytes-ChangeByte-CrossOver- 00:06:26.879 [2024-09-30 21:50:11.179121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b0e7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.179149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.179260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.179276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.179409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.179425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:26.879 #12 NEW cov: 12395 ft: 13445 corp: 5/110b lim: 40 exec/s: 0 rss: 73Mb L: 25/30 MS: 2 ChangeByte-CrossOver- 00:06:26.879 [2024-09-30 21:50:11.219351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.219379] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.219516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.219535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:26.879 [2024-09-30 21:50:11.219665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:26.879 [2024-09-30 21:50:11.219681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.139 #13 NEW cov: 12395 ft: 13587 corp: 6/135b lim: 40 exec/s: 0 rss: 73Mb L: 25/30 MS: 1 InsertByte- 00:06:27.139 [2024-09-30 21:50:11.279440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.279467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.279586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.279604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.279735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffff0aff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.279752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.139 #14 NEW cov: 12395 ft: 13647 corp: 7/160b lim: 40 exec/s: 0 rss: 73Mb L: 25/30 MS: 1 CrossOver- 00:06:27.139 [2024-09-30 21:50:11.319712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.319740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.319863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.319880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.320013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.320028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.320152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:27.139 [2024-09-30 21:50:11.320169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.139 #19 NEW cov: 12395 ft: 14142 corp: 8/194b lim: 40 exec/s: 0 rss: 73Mb L: 34/34 MS: 5 ChangeBit-ChangeByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:06:27.139 [2024-09-30 21:50:11.369800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.369827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.369928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.369942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.370065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffffb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.370080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.139 #20 NEW cov: 12395 ft: 14182 corp: 9/219b lim: 40 exec/s: 0 rss: 73Mb L: 25/34 MS: 1 ChangeBit- 00:06:27.139 [2024-09-30 21:50:11.440159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b0e7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.440189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.440313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.440330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.440483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.440499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.139 [2024-09-30 21:50:11.440625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.139 [2024-09-30 21:50:11.440643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.139 #21 NEW cov: 12395 ft: 14246 corp: 10/256b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:27.398 [2024-09-30 21:50:11.510445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.510473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.510594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.510611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.510745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.510762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.510889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.510906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.399 #22 NEW cov: 12395 ft: 14298 corp: 11/289b lim: 40 exec/s: 0 rss: 73Mb L: 33/37 MS: 1 InsertRepeatedBytes- 00:06:27.399 [2024-09-30 21:50:11.580355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.580383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.580513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.580535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.580662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.580680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.399 #23 NEW cov: 12395 ft: 14307 corp: 12/314b lim: 40 exec/s: 0 rss: 74Mb L: 25/37 MS: 1 CopyPart- 00:06:27.399 [2024-09-30 21:50:11.630333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.630361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.630480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.630496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:27.399 #26 NEW cov: 12418 ft: 14589 corp: 13/337b lim: 40 exec/s: 0 rss: 74Mb L: 23/37 MS: 3 ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:06:27.399 [2024-09-30 
21:50:11.680522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.680551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.680670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.680689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 #27 NEW cov: 12418 ft: 14650 corp: 14/356b lim: 40 exec/s: 0 rss: 74Mb L: 19/37 MS: 1 EraseBytes- 00:06:27.399 [2024-09-30 21:50:11.751066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.751093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.751225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.751243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.751369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.751387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.399 [2024-09-30 21:50:11.751507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.399 [2024-09-30 21:50:11.751523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:27.657 #28 NEW cov: 12418 ft: 14709 corp: 15/390b lim: 40 exec/s: 28 rss: 74Mb L: 34/37 MS: 1 ShuffleBytes- 00:06:27.657 [2024-09-30 21:50:11.810490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.657 [2024-09-30 21:50:11.810520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.657 #29 NEW cov: 12418 ft: 15010 corp: 16/399b lim: 40 exec/s: 29 rss: 74Mb L: 9/37 MS: 1 CrossOver- 00:06:27.657 [2024-09-30 21:50:11.860731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:fffffff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.657 [2024-09-30 21:50:11.860759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.658 #30 NEW cov: 12418 ft: 15034 corp: 17/408b lim: 40 exec/s: 30 rss: 74Mb L: 9/37 MS: 1 ChangeBit- 00:06:27.658 [2024-09-30 21:50:11.931383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:4 nsid:0 cdw10:5b0e7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.931412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.658 [2024-09-30 21:50:11.931526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.931543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.658 [2024-09-30 21:50:11.931669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.931685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.658 #31 NEW cov: 12418 ft: 15059 corp: 18/433b lim: 40 exec/s: 31 rss: 74Mb L: 25/37 MS: 1 ShuffleBytes- 00:06:27.658 [2024-09-30 21:50:11.971453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.971493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.658 [2024-09-30 21:50:11.971608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.971628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.658 [2024-09-30 21:50:11.971757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:11.971774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.658 #32 NEW cov: 12418 ft: 15069 corp: 19/458b lim: 40 exec/s: 32 rss: 74Mb L: 25/37 MS: 1 ChangeByte- 00:06:27.658 [2024-09-30 21:50:12.021464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3e3e3e3e cdw11:3e3e3e3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:12.021491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.658 [2024-09-30 21:50:12.021608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e8798ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.658 [2024-09-30 21:50:12.021630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.917 #35 NEW cov: 12418 ft: 15086 corp: 20/475b lim: 40 exec/s: 35 rss: 74Mb L: 17/37 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:06:27.917 [2024-09-30 21:50:12.071272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.071299] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.917 #36 NEW cov: 12418 ft: 15127 corp: 21/488b lim: 40 exec/s: 36 rss: 74Mb L: 13/37 MS: 1 EraseBytes- 00:06:27.917 [2024-09-30 21:50:12.142076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.142105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.142238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:235b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.142257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.142386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.142406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.917 #37 NEW cov: 12418 ft: 15133 corp: 22/512b lim: 40 exec/s: 37 rss: 74Mb L: 24/37 MS: 1 InsertByte- 00:06:27.917 [2024-09-30 21:50:12.212214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798dfff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.212243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.212332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.212349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.212478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.212496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.917 #38 NEW cov: 12418 ft: 15208 corp: 23/537b lim: 40 exec/s: 38 rss: 74Mb L: 25/37 MS: 1 ChangeBit- 00:06:27.917 [2024-09-30 21:50:12.282690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.282718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.282846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.282865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.282996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:ffffff7e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.283014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:27.917 [2024-09-30 21:50:12.283151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:27.917 [2024-09-30 21:50:12.283169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.176 #39 NEW cov: 12418 ft: 15214 corp: 24/571b lim: 40 exec/s: 39 rss: 74Mb L: 34/37 MS: 1 ChangeByte- 00:06:28.176 [2024-09-30 21:50:12.352617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0e7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.352648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.352785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.352803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.352934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c15 cdw11:70b04be8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.352950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.176 #40 NEW cov: 12418 ft: 15249 corp: 25/601b lim: 40 exec/s: 40 rss: 74Mb L: 30/37 MS: 1 CMP- DE: "\025p\260K\350=f\000"- 00:06:28.176 [2024-09-30 21:50:12.422822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:2c5b5b5b cdw11:5b5b5ba5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.422848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.422976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a35b5b5b cdw11:235b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.422994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.423127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.423143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.176 #41 NEW cov: 12418 ft: 15269 corp: 26/625b lim: 40 exec/s: 41 rss: 75Mb L: 24/37 MS: 1 ChangeBinInt- 00:06:28.176 [2024-09-30 21:50:12.493096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:40ffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.493125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.493256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.493275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.493397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.493416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.176 #42 NEW cov: 12418 ft: 15270 corp: 27/650b lim: 40 exec/s: 42 rss: 75Mb L: 25/37 MS: 1 CrossOver- 00:06:28.176 [2024-09-30 21:50:12.533420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5b0e7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.533446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.533579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.533595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.533732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.533748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.176 [2024-09-30 21:50:12.533864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0000007a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.176 [2024-09-30 21:50:12.533881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.435 #43 NEW cov: 12418 ft: 15283 corp: 28/688b lim: 40 exec/s: 43 rss: 75Mb L: 38/38 MS: 1 InsertByte- 00:06:28.435 [2024-09-30 21:50:12.603587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.435 [2024-09-30 21:50:12.603616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.435 [2024-09-30 21:50:12.603750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.603768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.603889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffff7e cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.603907] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.604036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.604053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.436 #44 NEW cov: 12418 ft: 15304 corp: 29/725b lim: 40 exec/s: 44 rss: 75Mb L: 37/38 MS: 1 CopyPart- 00:06:28.436 [2024-09-30 21:50:12.663679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.663705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.663825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.663842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.663955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.663972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.664096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.664113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.436 #45 NEW cov: 12418 ft: 15351 corp: 30/759b lim: 40 exec/s: 45 rss: 75Mb L: 34/38 MS: 1 ShuffleBytes- 00:06:28.436 [2024-09-30 21:50:12.703357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:8798ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.703385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.703517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.703534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.436 #46 NEW cov: 12418 ft: 15361 corp: 31/777b lim: 40 exec/s: 46 rss: 75Mb L: 18/38 MS: 1 EraseBytes- 00:06:28.436 [2024-09-30 21:50:12.753987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffff7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.754013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.754120] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:7c7c7c7c cdw11:7c7c7c7c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.754137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.754262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:7c1570b0 cdw11:4be83d66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.754279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.436 [2024-09-30 21:50:12.754412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:007c7c7c cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.436 [2024-09-30 21:50:12.754429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:28.436 #47 NEW cov: 12418 ft: 15373 corp: 32/811b lim: 40 exec/s: 23 rss: 75Mb L: 34/38 MS: 1 CrossOver- 00:06:28.436 #47 DONE cov: 12418 ft: 15373 corp: 32/811b lim: 40 exec/s: 23 rss: 75Mb 00:06:28.436 ###### Recommended dictionary. ###### 00:06:28.436 "\025p\260K\350=f\000" # Uses: 0 00:06:28.436 ###### End of recommended dictionary. ###### 00:06:28.436 Done 47 runs in 2 second(s) 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo 
leak:nvmf_ctrlr_create 00:06:28.695 21:50:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:06:28.695 [2024-09-30 21:50:12.937007] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:28.695 [2024-09-30 21:50:12.937076] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049298 ] 00:06:28.955 [2024-09-30 21:50:13.113692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.955 [2024-09-30 21:50:13.177259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.955 [2024-09-30 21:50:13.235989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.955 [2024-09-30 21:50:13.252313] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:06:28.955 INFO: Running with entropic power schedule (0xFF, 100). 00:06:28.955 INFO: Seed: 688337214 00:06:28.955 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:28.955 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:28.955 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:06:28.955 INFO: A corpus is not provided, starting from an empty corpus 00:06:28.955 #2 INITED exec/s: 0 rss: 66Mb 00:06:28.955 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:28.955 This may also happen if the target rejected all inputs we tried so far 00:06:28.955 [2024-09-30 21:50:13.301466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.955 [2024-09-30 21:50:13.301494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:28.955 [2024-09-30 21:50:13.301556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.955 [2024-09-30 21:50:13.301570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:28.955 [2024-09-30 21:50:13.301626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.955 [2024-09-30 21:50:13.301640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:28.955 [2024-09-30 21:50:13.301698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:28.955 [2024-09-30 21:50:13.301712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.473 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:06:29.473 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:29.473 #3 NEW cov: 12203 ft: 12198 corp: 2/38b lim: 40 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:29.473 [2024-09-30 21:50:13.632047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.632087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.473 [2024-09-30 21:50:13.632160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.632181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.473 [2024-09-30 21:50:13.632244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.632261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.473 #5 NEW cov: 12316 ft: 13244 corp: 3/67b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:29.473 [2024-09-30 21:50:13.691976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.692003] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.473 [2024-09-30 21:50:13.692063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.692076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.473 #6 NEW cov: 12322 ft: 13684 corp: 4/86b lim: 40 exec/s: 0 rss: 73Mb L: 19/37 MS: 1 EraseBytes- 00:06:29.473 [2024-09-30 21:50:13.751952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.751977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.473 #7 NEW cov: 12407 ft: 14563 corp: 5/98b lim: 40 exec/s: 0 rss: 73Mb L: 12/37 MS: 1 EraseBytes- 00:06:29.473 [2024-09-30 21:50:13.812272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050501 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.812298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.473 [2024-09-30 21:50:13.812361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:663de926 cdw11:56fc2c05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.473 [2024-09-30 21:50:13.812374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.473 #8 NEW cov: 12407 ft: 14678 corp: 6/117b lim: 40 exec/s: 0 rss: 73Mb L: 19/37 MS: 1 CMP- DE: "\001f=\351&V\374,"- 00:06:29.732 [2024-09-30 21:50:13.852388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.732 [2024-09-30 21:50:13.852414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.732 [2024-09-30 21:50:13.852473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.732 [2024-09-30 21:50:13.852487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.732 #11 NEW cov: 12407 ft: 14741 corp: 7/138b lim: 40 exec/s: 0 rss: 73Mb L: 21/37 MS: 3 CrossOver-ChangeByte-InsertRepeatedBytes- 00:06:29.732 [2024-09-30 21:50:13.892660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.732 [2024-09-30 21:50:13.892685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.732 [2024-09-30 21:50:13.892740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.732 [2024-09-30 21:50:13.892757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.732 [2024-09-30 
21:50:13.892813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.732 [2024-09-30 21:50:13.892827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.733 #12 NEW cov: 12407 ft: 14811 corp: 8/167b lim: 40 exec/s: 0 rss: 73Mb L: 29/37 MS: 1 ChangeBit- 00:06:29.733 [2024-09-30 21:50:13.932644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:10050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:13.932669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.733 [2024-09-30 21:50:13.932725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01663de9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:13.932739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.733 #13 NEW cov: 12407 ft: 14872 corp: 9/187b lim: 40 exec/s: 0 rss: 73Mb L: 20/37 MS: 1 CrossOver- 00:06:29.733 [2024-09-30 21:50:13.972760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:10010505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:13.972786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.733 [2024-09-30 21:50:13.972847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01663de9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:13.972860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.733 #14 NEW cov: 12407 ft: 14909 corp: 10/207b lim: 40 exec/s: 0 rss: 73Mb L: 20/37 MS: 1 ChangeBit- 00:06:29.733 [2024-09-30 21:50:14.032794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:1005fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:14.032819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.733 #15 NEW cov: 12407 ft: 14936 corp: 11/219b lim: 40 exec/s: 0 rss: 73Mb L: 12/37 MS: 1 EraseBytes- 00:06:29.733 [2024-09-30 21:50:14.073064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:14.073089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.733 [2024-09-30 21:50:14.073148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.733 [2024-09-30 21:50:14.073165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.733 #16 NEW cov: 12407 ft: 14970 corp: 12/242b lim: 40 exec/s: 0 rss: 73Mb L: 23/37 MS: 1 EraseBytes- 00:06:29.992 [2024-09-30 21:50:14.113296] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:10010505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.113328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.113386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01663de9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.113399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.113459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05393939 cdw11:3910100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.113472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.992 #17 NEW cov: 12407 ft: 14979 corp: 13/266b lim: 40 exec/s: 0 rss: 73Mb L: 24/37 MS: 1 InsertRepeatedBytes- 00:06:29.992 [2024-09-30 21:50:14.173359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a01663d cdw11:e92656fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.173385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.173440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2c3de926 cdw11:56fc2c05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.173453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.992 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:29.992 #18 NEW cov: 12430 ft: 15091 corp: 14/285b lim: 40 exec/s: 0 rss: 74Mb L: 19/37 MS: 1 PersAutoDict- DE: "\001f=\351&V\374,"- 00:06:29.992 [2024-09-30 21:50:14.233825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.233850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.233910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.233924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.233982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.233996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.234051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:05050507 cdw11:0510100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.234065] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:29.992 #19 NEW cov: 12430 ft: 15133 corp: 15/317b lim: 40 exec/s: 0 rss: 74Mb L: 32/37 MS: 1 InsertRepeatedBytes- 00:06:29.992 [2024-09-30 21:50:14.293511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:1005242c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.293535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.992 #20 NEW cov: 12430 ft: 15145 corp: 16/329b lim: 40 exec/s: 20 rss: 74Mb L: 12/37 MS: 1 ChangeByte- 00:06:29.992 [2024-09-30 21:50:14.353828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:10663de9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.353854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:29.992 [2024-09-30 21:50:14.353910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2656fc2c cdw11:0556fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:29.992 [2024-09-30 21:50:14.353924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.251 #21 NEW cov: 12430 ft: 15197 corp: 17/349b lim: 40 exec/s: 21 rss: 74Mb L: 20/37 MS: 1 CopyPart- 00:06:30.251 [2024-09-30 21:50:14.394073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.394098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.394155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.394169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.394224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:050505fd cdw11:fa050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.394238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.251 #22 NEW cov: 12430 ft: 15226 corp: 18/378b lim: 40 exec/s: 22 rss: 74Mb L: 29/37 MS: 1 ChangeBinInt- 00:06:30.251 [2024-09-30 21:50:14.433903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a01663d cdw11:e92656fc SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.433928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.251 #23 NEW cov: 12430 ft: 15319 corp: 19/388b lim: 40 exec/s: 23 rss: 74Mb L: 10/37 MS: 1 EraseBytes- 00:06:30.251 [2024-09-30 21:50:14.494172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05058001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.494197] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.494255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:663de926 cdw11:56fc2c05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.494269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.251 #24 NEW cov: 12430 ft: 15329 corp: 20/407b lim: 40 exec/s: 24 rss: 74Mb L: 19/37 MS: 1 ChangeByte- 00:06:30.251 [2024-09-30 21:50:14.534639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.534666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.534724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.534739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.534797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.534812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.534866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:050a0507 cdw11:0510100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.534882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.251 #25 NEW cov: 12430 ft: 15345 corp: 21/439b lim: 40 exec/s: 25 rss: 74Mb L: 32/37 MS: 1 CrossOver- 00:06:30.251 [2024-09-30 21:50:14.594852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.594881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.594939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.594953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.595009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.595023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.251 [2024-09-30 21:50:14.595078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:05050507 cdw11:0510100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.251 [2024-09-30 21:50:14.595092] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.251 #26 NEW cov: 12430 ft: 15364 corp: 22/471b lim: 40 exec/s: 26 rss: 74Mb L: 32/37 MS: 1 ShuffleBytes- 00:06:30.510 [2024-09-30 21:50:14.634752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.634777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.510 [2024-09-30 21:50:14.634836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.634850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.510 [2024-09-30 21:50:14.634906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:050505e9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.634919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.510 #27 NEW cov: 12430 ft: 15409 corp: 23/500b lim: 40 exec/s: 27 rss: 74Mb L: 29/37 MS: 1 CrossOver- 00:06:30.510 [2024-09-30 21:50:14.674578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.674603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.510 #28 NEW cov: 12430 ft: 15435 corp: 24/513b lim: 40 exec/s: 28 rss: 74Mb L: 13/37 MS: 1 InsertByte- 00:06:30.510 [2024-09-30 21:50:14.734873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.734899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.510 [2024-09-30 21:50:14.734958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.734972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.510 #29 NEW cov: 12430 ft: 15444 corp: 25/536b lim: 40 exec/s: 29 rss: 74Mb L: 23/37 MS: 1 ShuffleBytes- 00:06:30.510 [2024-09-30 21:50:14.794882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:663de926 cdw11:56fc2c3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.794907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.510 [2024-09-30 21:50:14.855022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6b3de926 cdw11:56fc2c3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.510 [2024-09-30 21:50:14.855049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.510 
#31 NEW cov: 12430 ft: 15519 corp: 26/544b lim: 40 exec/s: 31 rss: 74Mb L: 8/37 MS: 2 EraseBytes-ChangeBinInt- 00:06:30.768 [2024-09-30 21:50:14.895643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.895669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.768 [2024-09-30 21:50:14.895726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000020 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.895742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.768 [2024-09-30 21:50:14.895798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.895811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.768 [2024-09-30 21:50:14.895868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:050a0507 cdw11:0510100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.895882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.768 #32 NEW cov: 12430 ft: 15528 corp: 27/576b lim: 40 exec/s: 32 rss: 75Mb L: 32/37 MS: 1 ChangeBinInt- 00:06:30.768 [2024-09-30 21:50:14.955642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.955668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.768 [2024-09-30 21:50:14.955730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.955746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.768 [2024-09-30 21:50:14.955804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.768 [2024-09-30 21:50:14.955818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.768 #33 NEW cov: 12430 ft: 15532 corp: 28/601b lim: 40 exec/s: 33 rss: 75Mb L: 25/37 MS: 1 InsertRepeatedBytes- 00:06:30.769 [2024-09-30 21:50:15.015654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a1010 cdw11:10050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.015679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.015732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:01663de9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 
21:50:15.015749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.769 #34 NEW cov: 12430 ft: 15545 corp: 29/621b lim: 40 exec/s: 34 rss: 75Mb L: 20/37 MS: 1 ShuffleBytes- 00:06:30.769 [2024-09-30 21:50:15.056116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01663de9 cdw11:2656fc2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.056141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.056203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:3d000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.056217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.056271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.056285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.056342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.056356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.769 #35 NEW cov: 12430 ft: 15564 corp: 30/654b lim: 40 exec/s: 35 rss: 75Mb L: 33/37 MS: 1 PersAutoDict- DE: "\001f=\351&V\374,"- 00:06:30.769 [2024-09-30 21:50:15.116416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.116442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.116499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.116514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.116572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.116586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.116640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.116654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:30.769 [2024-09-30 21:50:15.116711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:10101010 cdw11:10101010 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:06:30.769 [2024-09-30 21:50:15.116725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:31.028 #36 NEW cov: 12430 ft: 15635 corp: 31/694b lim: 40 exec/s: 36 rss: 75Mb L: 40/40 MS: 1 CopyPart- 00:06:31.028 [2024-09-30 21:50:15.175992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a0a05fc cdw11:2c051010 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.176017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.028 #37 NEW cov: 12430 ft: 15667 corp: 32/703b lim: 40 exec/s: 37 rss: 75Mb L: 9/40 MS: 1 EraseBytes- 00:06:31.028 [2024-09-30 21:50:15.216219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:6b3de926 cdw11:56fc6b3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.216244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.028 [2024-09-30 21:50:15.216301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e92656fc cdw11:2c3d2c3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.216320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.028 #38 NEW cov: 12430 ft: 15744 corp: 33/719b lim: 40 exec/s: 38 rss: 75Mb L: 16/40 MS: 1 CopyPart- 00:06:31.028 [2024-09-30 21:50:15.276675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a101010 cdw11:ffffff05 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.276700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.028 [2024-09-30 21:50:15.276752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000020 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.276766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.028 [2024-09-30 21:50:15.276838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:05050505 cdw11:05050505 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.276858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:31.028 [2024-09-30 21:50:15.276933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:050a0507 cdw11:0510100a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.028 [2024-09-30 21:50:15.276955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:31.028 #39 NEW cov: 12430 ft: 15752 corp: 34/751b lim: 40 exec/s: 19 rss: 75Mb L: 32/40 MS: 1 ShuffleBytes- 00:06:31.028 #39 DONE cov: 12430 ft: 15752 corp: 34/751b lim: 40 exec/s: 19 rss: 75Mb 00:06:31.028 ###### Recommended dictionary. ###### 00:06:31.028 "\001f=\351&V\374," # Uses: 2 00:06:31.028 ###### End of recommended dictionary. 
###### 00:06:31.028 Done 39 runs in 2 second(s) 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:31.287 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:31.288 21:50:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:06:31.288 [2024-09-30 21:50:15.489252] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:31.288 [2024-09-30 21:50:15.489336] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049827 ] 00:06:31.546 [2024-09-30 21:50:15.666851] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.546 [2024-09-30 21:50:15.733046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.546 [2024-09-30 21:50:15.791980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.546 [2024-09-30 21:50:15.808356] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:06:31.546 INFO: Running with entropic power schedule (0xFF, 100). 00:06:31.546 INFO: Seed: 3246342290 00:06:31.546 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:31.546 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:31.546 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:06:31.546 INFO: A corpus is not provided, starting from an empty corpus 00:06:31.546 #2 INITED exec/s: 0 rss: 65Mb 00:06:31.546 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:31.546 This may also happen if the target rejected all inputs we tried so far 00:06:31.547 [2024-09-30 21:50:15.884953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.547 [2024-09-30 21:50:15.884989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:31.547 [2024-09-30 21:50:15.885105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.547 [2024-09-30 21:50:15.885121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:31.547 [2024-09-30 21:50:15.885235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:31.547 [2024-09-30 21:50:15.885252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.065 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:06:32.065 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:32.065 #17 NEW cov: 12182 ft: 12176 corp: 2/32b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 5 ShuffleBytes-InsertByte-CrossOver-CopyPart-InsertRepeatedBytes- 00:06:32.065 [2024-09-30 21:50:16.226075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.226117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.226245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffefffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.226265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.226383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.226401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.065 #18 NEW cov: 12313 ft: 12919 corp: 3/63b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 ChangeBit- 00:06:32.065 [2024-09-30 21:50:16.295888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.295918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.296055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.296073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.065 #19 NEW cov: 12319 ft: 13279 corp: 4/84b lim: 40 exec/s: 0 rss: 73Mb L: 21/31 MS: 1 EraseBytes- 00:06:32.065 [2024-09-30 21:50:16.356362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.356389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.356508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.356527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.356644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.356662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.065 #20 NEW cov: 12404 ft: 13605 corp: 5/114b lim: 40 exec/s: 0 rss: 73Mb L: 30/31 MS: 1 EraseBytes- 00:06:32.065 [2024-09-30 21:50:16.396956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.396984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.397119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.397136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.397264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.397281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.397411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.397426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.065 [2024-09-30 21:50:16.397548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.065 [2024-09-30 21:50:16.397564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.065 #21 NEW cov: 12404 ft: 14158 corp: 6/154b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:32.325 [2024-09-30 21:50:16.446286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.446319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.446455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffff26ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.446473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.325 #22 NEW cov: 12404 ft: 14228 corp: 7/175b lim: 40 exec/s: 0 rss: 73Mb L: 21/40 MS: 1 ChangeByte- 00:06:32.325 [2024-09-30 21:50:16.516520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.516548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.516658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.516675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.325 #23 NEW cov: 12404 ft: 14319 corp: 8/196b lim: 40 exec/s: 0 rss: 73Mb L: 21/40 MS: 1 CopyPart- 00:06:32.325 [2024-09-30 21:50:16.557558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.557586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.557714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:06:32.325 [2024-09-30 21:50:16.557734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.557862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.557880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.558003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.558021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.558145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ff2fffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.558163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.325 #24 NEW cov: 12404 ft: 14365 corp: 9/236b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:32.325 [2024-09-30 21:50:16.627682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.627710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.627840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.627857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.627972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.627988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.628111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.628128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.628240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.628259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.325 #25 NEW cov: 12404 ft: 14390 corp: 10/276b lim: 40 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 CopyPart- 00:06:32.325 [2024-09-30 21:50:16.676896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:32.325 [2024-09-30 21:50:16.676924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.325 [2024-09-30 21:50:16.677043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.325 [2024-09-30 21:50:16.677061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.585 #26 NEW cov: 12404 ft: 14484 corp: 11/295b lim: 40 exec/s: 0 rss: 73Mb L: 19/40 MS: 1 EraseBytes- 00:06:32.585 [2024-09-30 21:50:16.727179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0a32ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.727207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.585 [2024-09-30 21:50:16.727332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.727350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.585 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:32.585 #27 NEW cov: 12427 ft: 14551 corp: 12/317b lim: 40 exec/s: 0 rss: 73Mb L: 22/40 MS: 1 InsertByte- 00:06:32.585 [2024-09-30 21:50:16.777291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:0a0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.777323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.585 [2024-09-30 21:50:16.777434] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.777450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.585 #28 NEW cov: 12427 ft: 14570 corp: 13/339b lim: 40 exec/s: 0 rss: 74Mb L: 22/40 MS: 1 InsertByte- 00:06:32.585 [2024-09-30 21:50:16.847518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:0a0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.847547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.585 [2024-09-30 21:50:16.847670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.847688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.585 #29 NEW cov: 12427 ft: 14600 corp: 14/361b lim: 40 exec/s: 29 rss: 74Mb L: 22/40 MS: 1 ShuffleBytes- 00:06:32.585 [2024-09-30 21:50:16.917703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0afff7ff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:32.585 [2024-09-30 21:50:16.917731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.585 [2024-09-30 21:50:16.917863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.585 [2024-09-30 21:50:16.917881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.585 #30 NEW cov: 12427 ft: 14614 corp: 15/382b lim: 40 exec/s: 30 rss: 74Mb L: 21/40 MS: 1 ChangeBit- 00:06:32.844 [2024-09-30 21:50:16.968297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:0a0aff27 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:16.968331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:16.968464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:27272727 cdw11:27272727 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:16.968484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:16.968613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:27ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:16.968629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:16.968741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:16.968758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.845 #31 NEW cov: 12427 ft: 14657 corp: 16/414b lim: 40 exec/s: 31 rss: 74Mb L: 32/40 MS: 1 InsertRepeatedBytes- 00:06:32.845 [2024-09-30 21:50:17.018858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.018885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.019010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.019029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.019154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ff01ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.019173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.019293] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.019311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.019447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.019463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.845 #32 NEW cov: 12427 ft: 14670 corp: 17/454b lim: 40 exec/s: 32 rss: 74Mb L: 40/40 MS: 1 ChangeBinInt- 00:06:32.845 [2024-09-30 21:50:17.089074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01663deb cdw11:0b8cbba2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.089101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.089198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.089213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.089323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.089343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.089458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.089475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.089601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.089619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:32.845 #33 NEW cov: 12427 ft: 14762 corp: 18/494b lim: 40 exec/s: 33 rss: 74Mb L: 40/40 MS: 1 CMP- DE: "\001f=\353\013\214\273\242"- 00:06:32.845 [2024-09-30 21:50:17.128588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.128616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.128743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.128760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.128880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) 
qid:0 cid:6 nsid:0 cdw10:ff2f2f2f cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.128898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.845 #34 NEW cov: 12427 ft: 14782 corp: 19/523b lim: 40 exec/s: 34 rss: 74Mb L: 29/40 MS: 1 EraseBytes- 00:06:32.845 [2024-09-30 21:50:17.179248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01663deb cdw11:0b8cbba2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.179274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.179405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.179421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.179548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.179565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.179701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.179718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:32.845 [2024-09-30 21:50:17.179843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:32.845 [2024-09-30 21:50:17.179860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.105 #35 NEW cov: 12427 ft: 14797 corp: 20/563b lim: 40 exec/s: 35 rss: 74Mb L: 40/40 MS: 1 CopyPart- 00:06:33.105 [2024-09-30 21:50:17.249023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:0a0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.249050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.249174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.249192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.249303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.249323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.105 #36 NEW cov: 12427 ft: 14816 corp: 21/589b lim: 40 exec/s: 36 rss: 74Mb L: 26/40 MS: 1 CMP- 
DE: "\000\000\000\010"- 00:06:33.105 [2024-09-30 21:50:17.299448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.299477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.299598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.299615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.299728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.299746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.299867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.299886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.105 #37 NEW cov: 12427 ft: 14854 corp: 22/623b lim: 40 exec/s: 37 rss: 74Mb L: 34/40 MS: 1 CrossOver- 00:06:33.105 [2024-09-30 21:50:17.369096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0a32ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.369124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.369241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.369258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.105 #38 NEW cov: 12427 ft: 14891 corp: 23/645b lim: 40 exec/s: 38 rss: 74Mb L: 22/40 MS: 1 CopyPart- 00:06:33.105 [2024-09-30 21:50:17.440154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:01663deb cdw11:0b8cbb4b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.440180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.440298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.440319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.440443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.440458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.440574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:2f2f2f2f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.440591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.105 [2024-09-30 21:50:17.440708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:2f2f2f2f cdw11:2fffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.105 [2024-09-30 21:50:17.440725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.105 #39 NEW cov: 12427 ft: 14910 corp: 24/685b lim: 40 exec/s: 39 rss: 74Mb L: 40/40 MS: 1 ChangeByte- 00:06:33.364 [2024-09-30 21:50:17.490244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:0a505050 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.364 [2024-09-30 21:50:17.490271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.364 [2024-09-30 21:50:17.490390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:50505050 cdw11:50505050 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.364 [2024-09-30 21:50:17.490406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.364 [2024-09-30 21:50:17.490524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:50505050 cdw11:5050500a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.364 [2024-09-30 21:50:17.490540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.364 [2024-09-30 21:50:17.490657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.364 [2024-09-30 21:50:17.490674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.364 [2024-09-30 21:50:17.490789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.490807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.365 #40 NEW cov: 12427 ft: 14937 corp: 25/725b lim: 40 exec/s: 40 rss: 74Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:33.365 [2024-09-30 21:50:17.560237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a0a cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.560264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.560392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.560409] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.560527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.560541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.560667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff2b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.560684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.365 #41 NEW cov: 12427 ft: 14961 corp: 26/759b lim: 40 exec/s: 41 rss: 74Mb L: 34/40 MS: 1 ChangeByte- 00:06:33.365 [2024-09-30 21:50:17.630065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.630094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.630212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a0affff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.630228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.630331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.630350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.365 #42 NEW cov: 12427 ft: 15021 corp: 27/789b lim: 40 exec/s: 42 rss: 74Mb L: 30/40 MS: 1 InsertRepeatedBytes- 00:06:33.365 [2024-09-30 21:50:17.680580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd410a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.680608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.680744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0a0a0a0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.680765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.680889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:0a0affff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.680906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.365 [2024-09-30 21:50:17.681028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.365 [2024-09-30 21:50:17.681047] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.365 #43 NEW cov: 12427 ft: 15032 corp: 28/821b lim: 40 exec/s: 43 rss: 74Mb L: 32/40 MS: 1 CrossOver- 00:06:33.625 [2024-09-30 21:50:17.750521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a8a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.750549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.750680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffefffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.750698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.750818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.750835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.625 #44 NEW cov: 12427 ft: 15045 corp: 29/852b lim: 40 exec/s: 44 rss: 74Mb L: 31/40 MS: 1 ChangeBit- 00:06:33.625 [2024-09-30 21:50:17.800659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.800686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.800809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffefffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.800826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.800939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.800956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.625 #45 NEW cov: 12427 ft: 15052 corp: 30/883b lim: 40 exec/s: 45 rss: 74Mb L: 31/40 MS: 1 ShuffleBytes- 00:06:33.625 [2024-09-30 21:50:17.851354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:fd0a0a0a cdw11:0affffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.851381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.851509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffefffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.851527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.851642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 
nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.851660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.851787] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.851803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:33.625 [2024-09-30 21:50:17.851932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:33.625 [2024-09-30 21:50:17.851948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:33.625 #46 NEW cov: 12427 ft: 15062 corp: 31/923b lim: 40 exec/s: 23 rss: 75Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:06:33.625 #46 DONE cov: 12427 ft: 15062 corp: 31/923b lim: 40 exec/s: 23 rss: 75Mb 00:06:33.625 ###### Recommended dictionary. ###### 00:06:33.625 "\001f=\353\013\214\273\242" # Uses: 0 00:06:33.625 "\000\000\000\010" # Uses: 0 00:06:33.625 ###### End of recommended dictionary. ###### 00:06:33.625 Done 46 runs in 2 second(s) 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:33.885 21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:33.885 
21:50:18 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:06:33.885 [2024-09-30 21:50:18.058350] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:33.885 [2024-09-30 21:50:18.058426] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050135 ] 00:06:33.885 [2024-09-30 21:50:18.239581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.145 [2024-09-30 21:50:18.306208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.145 [2024-09-30 21:50:18.365142] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.145 [2024-09-30 21:50:18.381493] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:06:34.145 INFO: Running with entropic power schedule (0xFF, 100). 00:06:34.145 INFO: Seed: 1524374356 00:06:34.145 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:34.145 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:34.145 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:06:34.145 INFO: A corpus is not provided, starting from an empty corpus 00:06:34.145 #2 INITED exec/s: 0 rss: 65Mb 00:06:34.145 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:34.145 This may also happen if the target rejected all inputs we tried so far 00:06:34.145 [2024-09-30 21:50:18.437192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a570a1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.145 [2024-09-30 21:50:18.437222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.145 [2024-09-30 21:50:18.437297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.145 [2024-09-30 21:50:18.437317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.145 [2024-09-30 21:50:18.437372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.145 [2024-09-30 21:50:18.437384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.145 [2024-09-30 21:50:18.437441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.145 [2024-09-30 21:50:18.437454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.404 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:06:34.404 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:34.404 #7 NEW cov: 12189 ft: 12184 corp: 2/39b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 5 CopyPart-CrossOver-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:06:34.404 [2024-09-30 21:50:18.767858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.404 [2024-09-30 21:50:18.767896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 #9 NEW cov: 12302 ft: 13490 corp: 3/51b lim: 40 exec/s: 0 rss: 73Mb L: 12/38 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:34.663 [2024-09-30 21:50:18.807799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:18.807827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 #10 NEW cov: 12308 ft: 13763 corp: 4/63b lim: 40 exec/s: 0 rss: 73Mb L: 12/38 MS: 1 CMP- DE: "\034\000\000\000"- 00:06:34.663 [2024-09-30 21:50:18.868111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:000e0e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:18.868136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 [2024-09-30 21:50:18.868198] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:18.868213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.663 #11 NEW cov: 12393 ft: 14180 corp: 5/79b lim: 40 exec/s: 0 rss: 73Mb L: 16/38 MS: 1 PersAutoDict- DE: "\034\000\000\000"- 00:06:34.663 [2024-09-30 21:50:18.928144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c cdw11:0c000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:18.928170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 #12 NEW cov: 12393 ft: 14304 corp: 6/91b lim: 40 exec/s: 0 rss: 73Mb L: 12/38 MS: 1 ChangeBinInt- 00:06:34.663 [2024-09-30 21:50:18.968263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0e0e0e0e cdw11:0e0a0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:18.968289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 #13 NEW cov: 12393 ft: 14458 corp: 7/103b lim: 40 exec/s: 0 rss: 73Mb L: 12/38 MS: 1 ShuffleBytes- 00:06:34.663 [2024-09-30 21:50:19.008344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:30fd0e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.663 [2024-09-30 21:50:19.008369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.663 #16 NEW cov: 12393 ft: 14573 corp: 8/112b lim: 40 exec/s: 0 rss: 73Mb L: 9/38 MS: 3 ChangeByte-InsertByte-CrossOver- 00:06:34.922 [2024-09-30 21:50:19.048643] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e0e cdw11:0e0a0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.048669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.048723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0e0e0e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.048737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.922 #17 NEW cov: 12393 ft: 14709 corp: 9/135b lim: 40 exec/s: 0 rss: 73Mb L: 23/38 MS: 1 CopyPart- 00:06:34.922 [2024-09-30 21:50:19.089016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a570a1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.089042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.089103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.089117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:06:34.922 [2024-09-30 21:50:19.089171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.089184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.089241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.089254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.922 #23 NEW cov: 12393 ft: 14740 corp: 10/173b lim: 40 exec/s: 0 rss: 73Mb L: 38/38 MS: 1 ShuffleBytes- 00:06:34.922 [2024-09-30 21:50:19.148944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:001c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.148969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.149031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:000e0e1c cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.149045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.922 #29 NEW cov: 12393 ft: 14771 corp: 11/193b lim: 40 exec/s: 0 rss: 73Mb L: 20/38 MS: 1 PersAutoDict- DE: "\034\000\000\000"- 00:06:34.922 [2024-09-30 21:50:19.209358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a570a1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.209384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.209447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.209465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.209524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1c1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.209538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:34.922 [2024-09-30 21:50:19.209598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1c1c1c1c cdw11:1c1c1d1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.209611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:34.922 #30 NEW cov: 12393 ft: 14792 corp: 12/231b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ChangeBit- 00:06:34.922 [2024-09-30 21:50:19.249049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:34.922 [2024-09-30 21:50:19.249074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 #31 NEW cov: 12393 ft: 14849 corp: 13/243b lim: 40 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 PersAutoDict- DE: "\034\000\000\000"- 00:06:35.189 [2024-09-30 21:50:19.309396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:00150e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.309422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 [2024-09-30 21:50:19.309478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.309498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.189 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:35.189 #32 NEW cov: 12416 ft: 14889 corp: 14/259b lim: 40 exec/s: 0 rss: 74Mb L: 16/38 MS: 1 ChangeBinInt- 00:06:35.189 [2024-09-30 21:50:19.349492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:000e0e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.349519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 [2024-09-30 21:50:19.349579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00001000 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.349596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.189 #33 NEW cov: 12416 ft: 14909 corp: 15/275b lim: 40 exec/s: 0 rss: 74Mb L: 16/38 MS: 1 ChangeBinInt- 00:06:35.189 [2024-09-30 21:50:19.389453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0af2f1f1 cdw11:f1f1f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.389479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 #34 NEW cov: 12416 ft: 14936 corp: 16/287b lim: 40 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 ChangeBinInt- 00:06:35.189 [2024-09-30 21:50:19.429575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001c00 cdw11:0e00000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.429600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 #35 NEW cov: 12416 ft: 14955 corp: 17/299b lim: 40 exec/s: 35 rss: 74Mb L: 12/38 MS: 1 ShuffleBytes- 00:06:35.189 [2024-09-30 21:50:19.489754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0af2f1f1 cdw11:f1e70e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.489779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:06:35.189 #36 NEW cov: 12416 ft: 14991 corp: 18/308b lim: 40 exec/s: 36 rss: 74Mb L: 9/38 MS: 1 EraseBytes- 00:06:35.189 [2024-09-30 21:50:19.550114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.550142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.189 [2024-09-30 21:50:19.550204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1c000000 cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.189 [2024-09-30 21:50:19.550219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.449 #37 NEW cov: 12416 ft: 15020 corp: 19/329b lim: 40 exec/s: 37 rss: 74Mb L: 21/38 MS: 1 CopyPart- 00:06:35.449 [2024-09-30 21:50:19.590036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001c00 cdw11:0e00000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.590063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.449 #38 NEW cov: 12416 ft: 15039 corp: 20/341b lim: 40 exec/s: 38 rss: 74Mb L: 12/38 MS: 1 ChangeByte- 00:06:35.449 [2024-09-30 21:50:19.650387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c cdw11:0000000e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.650414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.449 [2024-09-30 21:50:19.650474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1c000000 cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.650489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.449 #39 NEW cov: 12416 ft: 15055 corp: 21/362b lim: 40 exec/s: 39 rss: 74Mb L: 21/38 MS: 1 ChangeByte- 00:06:35.449 [2024-09-30 21:50:19.710429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a001c00 cdw11:00000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.710456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.449 #40 NEW cov: 12416 ft: 15083 corp: 22/374b lim: 40 exec/s: 40 rss: 74Mb L: 12/38 MS: 1 ShuffleBytes- 00:06:35.449 [2024-09-30 21:50:19.770571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:0e0e1c0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.770597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.449 #41 NEW cov: 12416 ft: 15091 corp: 23/389b lim: 40 exec/s: 41 rss: 74Mb L: 15/38 MS: 1 CopyPart- 00:06:35.449 [2024-09-30 21:50:19.810790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:000e0e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:35.449 [2024-09-30 21:50:19.810817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.449 [2024-09-30 21:50:19.810880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.449 [2024-09-30 21:50:19.810901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.708 #42 NEW cov: 12416 ft: 15124 corp: 24/405b lim: 40 exec/s: 42 rss: 74Mb L: 16/38 MS: 1 ChangeByte- 00:06:35.708 [2024-09-30 21:50:19.850858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:00150e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:19.850884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:19.850941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:4000000e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:19.850961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.708 #43 NEW cov: 12416 ft: 15155 corp: 25/421b lim: 40 exec/s: 43 rss: 74Mb L: 16/38 MS: 1 ChangeBit- 00:06:35.708 [2024-09-30 21:50:19.910939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:0e1c0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:19.910967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 #44 NEW cov: 12416 ft: 15167 corp: 26/435b lim: 40 exec/s: 44 rss: 74Mb L: 14/38 MS: 1 EraseBytes- 00:06:35.708 [2024-09-30 21:50:19.951167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:000e0e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:19.951195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:19.951261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000026 cdw11:0e0e0e5e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:19.951277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.708 #45 NEW cov: 12416 ft: 15237 corp: 27/451b lim: 40 exec/s: 45 rss: 74Mb L: 16/38 MS: 1 ChangeByte- 00:06:35.708 [2024-09-30 21:50:20.011688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0e0e1c00 cdw11:00000e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.011715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:20.011777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.011793] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:20.011857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0ec20e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.011872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:20.011934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0e0a0e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.011949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:35.708 #46 NEW cov: 12416 ft: 15248 corp: 28/483b lim: 40 exec/s: 46 rss: 75Mb L: 32/38 MS: 1 CrossOver- 00:06:35.708 [2024-09-30 21:50:20.071582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c0000 cdw11:00150000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.071611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.708 [2024-09-30 21:50:20.071679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.708 [2024-09-30 21:50:20.071693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.967 #47 NEW cov: 12416 ft: 15293 corp: 29/499b lim: 40 exec/s: 47 rss: 75Mb L: 16/38 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:06:35.967 [2024-09-30 21:50:20.111502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:002b8be1 cdw11:ceec3d66 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.111531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.967 #49 NEW cov: 12416 ft: 15305 corp: 30/510b lim: 40 exec/s: 49 rss: 75Mb L: 11/38 MS: 2 CrossOver-CMP- DE: "+\213\341\316\354=f\000"- 00:06:35.967 [2024-09-30 21:50:20.172019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a1c000e cdw11:0e0a0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.172046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.967 [2024-09-30 21:50:20.172109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0e0e0e00 cdw11:000e0e1c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.172124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:35.967 [2024-09-30 21:50:20.172184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:0000000e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.172198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:35.967 #50 NEW cov: 12416 ft: 15503 corp: 31/534b lim: 
40 exec/s: 50 rss: 75Mb L: 24/38 MS: 1 CrossOver- 00:06:35.967 [2024-09-30 21:50:20.211812] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0000 cdw11:1c000e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.211839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.967 #51 NEW cov: 12416 ft: 15520 corp: 32/548b lim: 40 exec/s: 51 rss: 75Mb L: 14/38 MS: 1 CopyPart- 00:06:35.967 [2024-09-30 21:50:20.251936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0af2f1f1 cdw11:f1e7f1f1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.251962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.967 #52 NEW cov: 12416 ft: 15546 corp: 33/557b lim: 40 exec/s: 52 rss: 75Mb L: 9/38 MS: 1 CopyPart- 00:06:35.967 [2024-09-30 21:50:20.312232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0e1c cdw11:00003700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.312257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:35.967 [2024-09-30 21:50:20.312320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0e1c0000 cdw11:000e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.967 [2024-09-30 21:50:20.312335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.226 #53 NEW cov: 12416 ft: 15683 corp: 34/579b lim: 40 exec/s: 53 rss: 75Mb L: 22/38 MS: 1 InsertByte- 00:06:36.226 [2024-09-30 21:50:20.352327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a0e0eff cdw11:ffffff0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.226 [2024-09-30 21:50:20.352358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.226 [2024-09-30 21:50:20.352422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0e0e0e0e cdw11:0e0e0e0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.226 [2024-09-30 21:50:20.352437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:36.226 #54 NEW cov: 12416 ft: 15701 corp: 35/595b lim: 40 exec/s: 54 rss: 75Mb L: 16/38 MS: 1 InsertRepeatedBytes- 00:06:36.226 [2024-09-30 21:50:20.392327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0af1f1f1 cdw11:f1f1f1f2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:36.226 [2024-09-30 21:50:20.392352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.226 #55 NEW cov: 12416 ft: 15713 corp: 36/607b lim: 40 exec/s: 27 rss: 75Mb L: 12/38 MS: 1 ShuffleBytes- 00:06:36.226 #55 DONE cov: 12416 ft: 15713 corp: 36/607b lim: 40 exec/s: 27 rss: 75Mb 00:06:36.226 ###### Recommended dictionary. 
###### 00:06:36.226 "\034\000\000\000" # Uses: 3 00:06:36.226 "\000\000\000\000\000\000\000\000" # Uses: 0 00:06:36.226 "+\213\341\316\354=f\000" # Uses: 0 00:06:36.226 ###### End of recommended dictionary. ###### 00:06:36.226 Done 55 runs in 2 second(s) 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:06:36.226 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:36.227 21:50:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:06:36.227 [2024-09-30 21:50:20.585936] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:36.227 [2024-09-30 21:50:20.586014] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050646 ] 00:06:36.486 [2024-09-30 21:50:20.763278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.486 [2024-09-30 21:50:20.829786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.745 [2024-09-30 21:50:20.888997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.745 [2024-09-30 21:50:20.905377] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:06:36.745 INFO: Running with entropic power schedule (0xFF, 100). 00:06:36.745 INFO: Seed: 4047369326 00:06:36.745 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:36.745 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:36.745 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:06:36.745 INFO: A corpus is not provided, starting from an empty corpus 00:06:36.745 #2 INITED exec/s: 0 rss: 65Mb 00:06:36.745 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:36.745 This may also happen if the target rejected all inputs we tried so far 00:06:36.745 [2024-09-30 21:50:20.950875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.745 [2024-09-30 21:50:20.950906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:36.745 [2024-09-30 21:50:20.950980] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:36.745 [2024-09-30 21:50:20.950999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.004 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:06:37.004 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:37.004 #4 NEW cov: 12183 ft: 12177 corp: 2/15b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 2 ChangeByte-InsertRepeatedBytes- 00:06:37.004 [2024-09-30 21:50:21.281765] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.004 [2024-09-30 21:50:21.281800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.004 [2024-09-30 21:50:21.281870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:00000009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.004 [2024-09-30 21:50:21.281888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.004 NEW_FUNC[1/1]: 0x4705d8 in feat_interrupt_vector_configuration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:332 00:06:37.004 #5 NEW cov: 12319 ft: 12907 corp: 
3/29b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ChangeBinInt- 00:06:37.004 [2024-09-30 21:50:21.341838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.004 [2024-09-30 21:50:21.341867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.004 [2024-09-30 21:50:21.341937] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:00000009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.004 [2024-09-30 21:50:21.341957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 #11 NEW cov: 12325 ft: 13125 corp: 4/43b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 CMP- DE: "\003\000"- 00:06:37.263 [2024-09-30 21:50:21.402008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.402035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.402112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.402133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 #12 NEW cov: 12410 ft: 13400 corp: 5/57b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 1 ShuffleBytes- 00:06:37.263 [2024-09-30 21:50:21.442112] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.442140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.442212] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.442233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 #13 NEW cov: 12410 ft: 13479 corp: 6/72b lim: 35 exec/s: 0 rss: 73Mb L: 15/15 MS: 1 InsertByte- 00:06:37.263 [2024-09-30 21:50:21.482190] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.482217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.482293] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.482318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 #14 NEW cov: 12410 ft: 13582 corp: 7/86b lim: 35 exec/s: 0 rss: 73Mb L: 14/15 MS: 1 ShuffleBytes- 00:06:37.263 [2024-09-30 21:50:21.542558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.542585] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.542656] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.542676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.542765] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.542781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.263 #15 NEW cov: 12410 ft: 13845 corp: 8/113b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 CopyPart- 00:06:37.263 [2024-09-30 21:50:21.602742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.602770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.602842] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.602861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.263 [2024-09-30 21:50:21.602936] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.263 [2024-09-30 21:50:21.602956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.522 #16 NEW cov: 12410 ft: 13886 corp: 9/140b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 ShuffleBytes- 00:06:37.522 [2024-09-30 21:50:21.662897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.522 [2024-09-30 21:50:21.662931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.522 [2024-09-30 21:50:21.663004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.522 [2024-09-30 21:50:21.663025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.522 [2024-09-30 21:50:21.663091] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:6 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.522 [2024-09-30 21:50:21.663107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.522 NEW_FUNC[1/1]: 0x46c498 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289 00:06:37.522 #17 NEW cov: 12421 ft: 13919 corp: 10/167b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 1 CMP- DE: "\377\377\377\003"- 00:06:37.522 [2024-09-30 21:50:21.702952] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000a9 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:06:37.522 [2024-09-30 21:50:21.702982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.522 [2024-09-30 21:50:21.703057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.522 [2024-09-30 21:50:21.703081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.522 [2024-09-30 21:50:21.703149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000a9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.522 [2024-09-30 21:50:21.703169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.522 #19 NEW cov: 12428 ft: 14021 corp: 11/194b lim: 35 exec/s: 0 rss: 73Mb L: 27/27 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:37.522 [2024-09-30 21:50:21.742900] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.742927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.523 [2024-09-30 21:50:21.743000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.743021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.523 #20 NEW cov: 12428 ft: 14024 corp: 12/208b lim: 35 exec/s: 0 rss: 74Mb L: 14/27 MS: 1 ChangeBit- 00:06:37.523 [2024-09-30 21:50:21.803085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.803112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.523 [2024-09-30 21:50:21.803184] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.803204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.523 #21 NEW cov: 12428 ft: 14056 corp: 13/223b lim: 35 exec/s: 0 rss: 74Mb L: 15/27 MS: 1 ChangeBit- 00:06:37.523 [2024-09-30 21:50:21.843037] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:4 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.843065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.523 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:37.523 #25 NEW cov: 12451 ft: 14789 corp: 14/230b lim: 35 exec/s: 0 rss: 74Mb L: 7/27 MS: 4 CopyPart-ChangeBit-ChangeByte-CrossOver- 00:06:37.523 [2024-09-30 21:50:21.883302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.883333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.523 [2024-09-30 21:50:21.883406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.523 [2024-09-30 21:50:21.883427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 #26 NEW cov: 12451 ft: 14893 corp: 15/244b lim: 35 exec/s: 0 rss: 74Mb L: 14/27 MS: 1 PersAutoDict- DE: "\377\377\377\003"- 00:06:37.782 [2024-09-30 21:50:21.923440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:21.923467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:21.923543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:21.923565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 #27 NEW cov: 12451 ft: 14914 corp: 16/258b lim: 35 exec/s: 27 rss: 74Mb L: 14/27 MS: 1 CopyPart- 00:06:37.782 [2024-09-30 21:50:21.963544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:21.963571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:21.963644] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:21.963668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 #28 NEW cov: 12451 ft: 14933 corp: 17/272b lim: 35 exec/s: 28 rss: 74Mb L: 14/27 MS: 1 CrossOver- 00:06:37.782 [2024-09-30 21:50:22.023711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.023738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:22.023809] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES INTERRUPT VECTOR CONFIGURATION cid:5 cdw10:00000009 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.023830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 #34 NEW cov: 12451 ft: 14945 corp: 18/286b lim: 35 exec/s: 34 rss: 74Mb L: 14/27 MS: 1 CMP- DE: "\015\000\000\000"- 00:06:37.782 [2024-09-30 21:50:22.063819] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.063846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:22.063917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.063936] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 #35 NEW cov: 12451 ft: 14951 corp: 19/301b lim: 35 exec/s: 35 rss: 74Mb L: 15/27 MS: 1 ChangeByte- 00:06:37.782 [2024-09-30 21:50:22.104090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.104121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:22.104193] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.104217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:37.782 [2024-09-30 21:50:22.104289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:37.782 [2024-09-30 21:50:22.104312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:37.782 #36 NEW cov: 12451 ft: 14989 corp: 20/323b lim: 35 exec/s: 36 rss: 74Mb L: 22/27 MS: 1 EraseBytes- 00:06:38.041 [2024-09-30 21:50:22.164104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.164131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.041 [2024-09-30 21:50:22.164204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.164223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.041 #37 NEW cov: 12451 ft: 15027 corp: 21/343b lim: 35 exec/s: 37 rss: 74Mb L: 20/27 MS: 1 EraseBytes- 00:06:38.041 [2024-09-30 21:50:22.224439] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.224465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.041 [2024-09-30 21:50:22.224537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.224558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.041 [2024-09-30 21:50:22.224629] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.224645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.041 #38 NEW cov: 12451 ft: 15056 corp: 22/367b lim: 35 exec/s: 38 rss: 74Mb L: 24/27 MS: 1 InsertRepeatedBytes- 00:06:38.041 [2024-09-30 21:50:22.284472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 
21:50:22.284498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.041 [2024-09-30 21:50:22.284572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.041 [2024-09-30 21:50:22.284591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.041 #39 NEW cov: 12451 ft: 15073 corp: 23/381b lim: 35 exec/s: 39 rss: 74Mb L: 14/27 MS: 1 ChangeByte- 00:06:38.042 [2024-09-30 21:50:22.344474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.042 [2024-09-30 21:50:22.344500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.042 #40 NEW cov: 12451 ft: 15111 corp: 24/391b lim: 35 exec/s: 40 rss: 74Mb L: 10/27 MS: 1 EraseBytes- 00:06:38.042 [2024-09-30 21:50:22.384745] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.042 [2024-09-30 21:50:22.384778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.042 [2024-09-30 21:50:22.384851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.042 [2024-09-30 21:50:22.384871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.042 #41 NEW cov: 12451 ft: 15114 corp: 25/405b lim: 35 exec/s: 41 rss: 74Mb L: 14/27 MS: 1 ChangeByte- 00:06:38.301 [2024-09-30 21:50:22.424991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.301 [2024-09-30 21:50:22.425017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.301 [2024-09-30 21:50:22.425090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.301 [2024-09-30 21:50:22.425110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.301 [2024-09-30 21:50:22.425180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.301 [2024-09-30 21:50:22.425197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.301 #42 NEW cov: 12451 ft: 15151 corp: 26/432b lim: 35 exec/s: 42 rss: 74Mb L: 27/27 MS: 1 CrossOver- 00:06:38.301 [2024-09-30 21:50:22.464972] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.301 [2024-09-30 21:50:22.464999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.302 [2024-09-30 21:50:22.465071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 
21:50:22.465093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.302 #43 NEW cov: 12451 ft: 15155 corp: 27/447b lim: 35 exec/s: 43 rss: 74Mb L: 15/27 MS: 1 ChangeByte- 00:06:38.302 [2024-09-30 21:50:22.525167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.525194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.302 [2024-09-30 21:50:22.525267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.525288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.302 #44 NEW cov: 12451 ft: 15165 corp: 28/461b lim: 35 exec/s: 44 rss: 74Mb L: 14/27 MS: 1 ShuffleBytes- 00:06:38.302 [2024-09-30 21:50:22.565244] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.565271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.302 [2024-09-30 21:50:22.565352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.565374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.302 #45 NEW cov: 12451 ft: 15189 corp: 29/476b lim: 35 exec/s: 45 rss: 74Mb L: 15/27 MS: 1 CopyPart- 00:06:38.302 [2024-09-30 21:50:22.625621] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.625648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.302 [2024-09-30 21:50:22.625723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.625743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.302 [2024-09-30 21:50:22.625818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000019 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.302 [2024-09-30 21:50:22.625840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.302 #48 NEW cov: 12451 ft: 15193 corp: 30/502b lim: 35 exec/s: 48 rss: 75Mb L: 26/27 MS: 3 EraseBytes-EraseBytes-InsertRepeatedBytes- 00:06:38.561 [2024-09-30 21:50:22.685449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.685479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.561 #49 NEW cov: 12451 ft: 15205 corp: 31/513b lim: 35 exec/s: 49 rss: 75Mb L: 11/27 MS: 1 EraseBytes- 00:06:38.561 [2024-09-30 21:50:22.745762] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.745789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.745863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.745884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.561 #50 NEW cov: 12451 ft: 15224 corp: 32/527b lim: 35 exec/s: 50 rss: 75Mb L: 14/27 MS: 1 ChangeBinInt- 00:06:38.561 [2024-09-30 21:50:22.806003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.806031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.806106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.806128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.561 #51 NEW cov: 12451 ft: 15236 corp: 33/542b lim: 35 exec/s: 51 rss: 75Mb L: 15/27 MS: 1 ShuffleBytes- 00:06:38.561 [2024-09-30 21:50:22.866468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.866495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.866569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.866590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.866657] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:6 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.866673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.866744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.866760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:38.561 #52 NEW cov: 12451 ft: 15528 corp: 34/570b lim: 35 exec/s: 52 rss: 75Mb L: 28/28 MS: 1 InsertByte- 00:06:38.561 [2024-09-30 21:50:22.906393] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.906421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.906493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.906512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:38.561 [2024-09-30 21:50:22.906585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:38.561 [2024-09-30 21:50:22.906602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:38.820 #53 NEW cov: 12451 ft: 15536 corp: 35/596b lim: 35 exec/s: 26 rss: 75Mb L: 26/28 MS: 1 InsertRepeatedBytes- 00:06:38.821 #53 DONE cov: 12451 ft: 15536 corp: 35/596b lim: 35 exec/s: 26 rss: 75Mb 00:06:38.821 ###### Recommended dictionary. ###### 00:06:38.821 "\003\000" # Uses: 0 00:06:38.821 "\377\377\377\003" # Uses: 1 00:06:38.821 "\015\000\000\000" # Uses: 0 00:06:38.821 ###### End of recommended dictionary. ###### 00:06:38.821 Done 53 runs in 2 second(s) 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:38.821 21:50:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:06:38.821 [2024-09-30 21:50:23.119581] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:38.821 [2024-09-30 21:50:23.119657] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051174 ] 00:06:39.080 [2024-09-30 21:50:23.298706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.080 [2024-09-30 21:50:23.364116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.080 [2024-09-30 21:50:23.422880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.080 [2024-09-30 21:50:23.439258] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:06:39.338 INFO: Running with entropic power schedule (0xFF, 100). 00:06:39.339 INFO: Seed: 2285403561 00:06:39.339 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:39.339 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:39.339 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:06:39.339 INFO: A corpus is not provided, starting from an empty corpus 00:06:39.339 #2 INITED exec/s: 0 rss: 65Mb 00:06:39.339 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:39.339 This may also happen if the target rejected all inputs we tried so far 00:06:39.339 [2024-09-30 21:50:23.487844] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.339 [2024-09-30 21:50:23.487872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.598 NEW_FUNC[1/714]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:06:39.598 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:39.598 #5 NEW cov: 12171 ft: 12160 corp: 2/14b lim: 35 exec/s: 0 rss: 73Mb L: 13/13 MS: 3 ShuffleBytes-CrossOver-InsertRepeatedBytes- 00:06:39.598 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:06:39.598 #7 NEW cov: 12298 ft: 12826 corp: 3/21b lim: 35 exec/s: 0 rss: 73Mb L: 7/13 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:06:39.598 [2024-09-30 21:50:23.858722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.858756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.598 #11 NEW cov: 12304 ft: 13110 corp: 4/29b lim: 35 exec/s: 0 rss: 73Mb L: 8/13 MS: 4 EraseBytes-CrossOver-ChangeByte-CopyPart- 00:06:39.598 [2024-09-30 21:50:23.919465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.919493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.598 [2024-09-30 21:50:23.919551] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.919566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.598 [2024-09-30 21:50:23.919620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.919634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.598 [2024-09-30 21:50:23.919690] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.919715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:39.598 #13 NEW cov: 12389 ft: 13936 corp: 5/64b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:39.598 [2024-09-30 21:50:23.958960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.598 [2024-09-30 21:50:23.958986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.857 #14 NEW cov: 12389 ft: 14014 corp: 6/77b lim: 35 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 ChangeByte- 00:06:39.857 [2024-09-30 21:50:24.019160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.019187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.857 #15 NEW cov: 12389 ft: 14127 corp: 7/90b lim: 35 exec/s: 0 rss: 73Mb L: 13/35 MS: 1 CopyPart- 00:06:39.857 [2024-09-30 21:50:24.059516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.059541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.059613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.059627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.059679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.059693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.857 #17 NEW cov: 12389 ft: 14363 corp: 8/111b lim: 35 exec/s: 0 rss: 73Mb L: 21/35 MS: 2 ChangeBit-InsertRepeatedBytes- 00:06:39.857 [2024-09-30 21:50:24.099630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.099656] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.099713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.099727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.099782] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.099796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.857 #18 NEW cov: 12389 ft: 14451 corp: 9/133b lim: 35 exec/s: 0 rss: 73Mb L: 22/35 MS: 1 InsertByte- 00:06:39.857 [2024-09-30 21:50:24.159679] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.159706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.159757] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.159771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.857 #19 NEW cov: 12389 ft: 14756 corp: 10/153b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 CrossOver- 00:06:39.857 [2024-09-30 21:50:24.220259] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.220286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.220341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.220355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.220428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.220444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:39.857 [2024-09-30 21:50:24.220510] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:39.857 [2024-09-30 21:50:24.220531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:06:40.116 #20 NEW cov: 12389 ft: 14835 corp: 11/188b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBit- 00:06:40.116 [2024-09-30 21:50:24.280000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.116 [2024-09-30 21:50:24.280026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.116 [2024-09-30 21:50:24.280083] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.116 [2024-09-30 21:50:24.280098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.116 #21 NEW cov: 12389 ft: 14900 corp: 12/208b lim: 35 exec/s: 0 rss: 73Mb L: 20/35 MS: 1 EraseBytes- 00:06:40.116 [2024-09-30 21:50:24.320341] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.116 [2024-09-30 21:50:24.320367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.116 [2024-09-30 21:50:24.320426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.116 [2024-09-30 21:50:24.320441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.320499] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.320513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.320569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:7 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.320583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.117 #22 NEW cov: 12389 ft: 14913 corp: 13/237b lim: 35 exec/s: 0 rss: 73Mb L: 29/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.117 [2024-09-30 21:50:24.360486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.360513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.360569] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.360584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.360642] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.360656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.360713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.360730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.117 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:40.117 #23 NEW cov: 12412 ft: 14955 corp: 14/265b lim: 35 exec/s: 0 rss: 74Mb L: 28/35 MS: 1 InsertRepeatedBytes- 00:06:40.117 [2024-09-30 21:50:24.420640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.420667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.420720] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.420734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.117 [2024-09-30 21:50:24.420808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.420824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.117 NEW_FUNC[1/1]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:06:40.117 #24 NEW cov: 12450 ft: 15004 corp: 15/293b lim: 35 exec/s: 0 rss: 74Mb L: 28/35 MS: 1 ChangeBit- 00:06:40.117 [2024-09-30 21:50:24.480465] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.117 [2024-09-30 21:50:24.480491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.376 #25 NEW cov: 12450 ft: 15025 corp: 16/302b lim: 35 exec/s: 25 rss: 74Mb L: 9/35 MS: 1 InsertByte- 00:06:40.376 [2024-09-30 21:50:24.540593] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.540619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.376 #26 NEW cov: 12450 ft: 15045 corp: 17/311b lim: 35 exec/s: 26 rss: 74Mb L: 9/35 MS: 1 ChangeBit- 00:06:40.376 [2024-09-30 21:50:24.600788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.600814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.376 #27 NEW cov: 12450 ft: 15105 corp: 18/320b lim: 35 exec/s: 27 rss: 74Mb L: 9/35 MS: 1 ChangeByte- 00:06:40.376 [2024-09-30 21:50:24.661050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.661076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.376 [2024-09-30 21:50:24.661134] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.661148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.376 #28 NEW cov: 12450 ft: 15113 corp: 19/340b lim: 35 exec/s: 28 rss: 74Mb L: 20/35 MS: 1 CopyPart- 00:06:40.376 [2024-09-30 21:50:24.721344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.721370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.376 [2024-09-30 21:50:24.721440] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.721463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.376 [2024-09-30 21:50:24.721539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.376 [2024-09-30 21:50:24.721559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.635 #29 NEW cov: 12450 ft: 15142 corp: 20/361b lim: 35 exec/s: 29 rss: 74Mb L: 21/35 MS: 1 InsertRepeatedBytes- 00:06:40.635 [2024-09-30 21:50:24.761431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.761457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.761513] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.761527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.761581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.761594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.635 #30 NEW cov: 12450 ft: 15146 corp: 21/382b lim: 35 exec/s: 30 rss: 74Mb L: 21/35 MS: 1 ShuffleBytes- 00:06:40.635 [2024-09-30 21:50:24.801688] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.801713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.801772] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.801786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.801843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.801857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 
21:50:24.801916] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000374 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.801930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.635 #31 NEW cov: 12450 ft: 15165 corp: 22/410b lim: 35 exec/s: 31 rss: 74Mb L: 28/35 MS: 1 CopyPart- 00:06:40.635 [2024-09-30 21:50:24.861493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.861520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 #32 NEW cov: 12450 ft: 15185 corp: 23/417b lim: 35 exec/s: 32 rss: 74Mb L: 7/35 MS: 1 CrossOver- 00:06:40.635 [2024-09-30 21:50:24.901710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.901735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.901794] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000025a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.901809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.635 #33 NEW cov: 12450 ft: 15207 corp: 24/431b lim: 35 exec/s: 33 rss: 74Mb L: 14/35 MS: 1 InsertRepeatedBytes- 00:06:40.635 [2024-09-30 21:50:24.941701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000560 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.941726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 #34 NEW cov: 12450 ft: 15230 corp: 25/441b lim: 35 exec/s: 34 rss: 74Mb L: 10/35 MS: 1 InsertByte- 00:06:40.635 [2024-09-30 21:50:24.982054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007aa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.982079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.982132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.982150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.635 [2024-09-30 21:50:24.982224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000025a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.635 [2024-09-30 21:50:24.982246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.894 #35 NEW cov: 12450 ft: 15243 corp: 26/463b lim: 35 exec/s: 35 rss: 74Mb L: 22/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:06:40.894 [2024-09-30 21:50:25.042381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:40.894 [2024-09-30 21:50:25.042406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.042464] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.042478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.042536] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.042550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.042605] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:7 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.042619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.894 #36 NEW cov: 12450 ft: 15248 corp: 27/492b lim: 35 exec/s: 36 rss: 75Mb L: 29/35 MS: 1 CopyPart- 00:06:40.894 [2024-09-30 21:50:25.102556] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.102582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.102641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000001d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.102656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.102712] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.102726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.102785] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:7 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.102799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:06:40.894 #37 NEW cov: 12450 ft: 15253 corp: 28/521b lim: 35 exec/s: 37 rss: 75Mb L: 29/35 MS: 1 ChangeBinInt- 00:06:40.894 [2024-09-30 21:50:25.162596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.162622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.162681] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.162695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:06:40.894 [2024-09-30 21:50:25.162751] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000010d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.162765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:40.894 #38 NEW cov: 12450 ft: 15275 corp: 29/544b lim: 35 exec/s: 38 rss: 75Mb L: 23/35 MS: 1 InsertByte- 00:06:40.894 [2024-09-30 21:50:25.222508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:40.894 [2024-09-30 21:50:25.222533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:40.894 #42 NEW cov: 12450 ft: 15291 corp: 30/556b lim: 35 exec/s: 42 rss: 75Mb L: 12/35 MS: 4 EraseBytes-ShuffleBytes-CopyPart-CopyPart- 00:06:41.154 [2024-09-30 21:50:25.262828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.262856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.154 [2024-09-30 21:50:25.262911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.262928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.154 [2024-09-30 21:50:25.263004] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.263028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.154 #43 NEW cov: 12450 ft: 15297 corp: 31/578b lim: 35 exec/s: 43 rss: 75Mb L: 22/35 MS: 1 ChangeBit- 00:06:41.154 [2024-09-30 21:50:25.302664] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000500 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.302690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.154 #44 NEW cov: 12450 ft: 15305 corp: 32/585b lim: 35 exec/s: 44 rss: 75Mb L: 7/35 MS: 1 ChangeBinInt- 00:06:41.154 #45 NEW cov: 12450 ft: 15315 corp: 33/592b lim: 35 exec/s: 45 rss: 75Mb L: 7/35 MS: 1 CopyPart- 00:06:41.154 [2024-09-30 21:50:25.403182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:4 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.403208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.154 [2024-09-30 21:50:25.403264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER cid:5 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.403282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:06:41.154 [2024-09-30 21:50:25.403339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST MEM BUFFER 
cid:6 cdw10:0000000d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.403353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:06:41.154 #46 NEW cov: 12450 ft: 15338 corp: 34/614b lim: 35 exec/s: 46 rss: 75Mb L: 22/35 MS: 1 ChangeByte- 00:06:41.154 [2024-09-30 21:50:25.463102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.154 [2024-09-30 21:50:25.463126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:06:41.154 #47 NEW cov: 12450 ft: 15351 corp: 35/627b lim: 35 exec/s: 23 rss: 75Mb L: 13/35 MS: 1 ShuffleBytes- 00:06:41.154 #47 DONE cov: 12450 ft: 15351 corp: 35/627b lim: 35 exec/s: 23 rss: 75Mb 00:06:41.154 ###### Recommended dictionary. ###### 00:06:41.154 "\001\000\000\000\000\000\000\000" # Uses: 1 00:06:41.154 ###### End of recommended dictionary. ###### 00:06:41.154 Done 47 runs in 2 second(s) 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:41.414 21:50:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c 
/tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:06:41.414 [2024-09-30 21:50:25.657671] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:41.414 [2024-09-30 21:50:25.657746] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051472 ] 00:06:41.673 [2024-09-30 21:50:25.843573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.674 [2024-09-30 21:50:25.910396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.674 [2024-09-30 21:50:25.969792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.674 [2024-09-30 21:50:25.986178] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:06:41.674 INFO: Running with entropic power schedule (0xFF, 100). 00:06:41.674 INFO: Seed: 539465487 00:06:41.674 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:41.674 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:41.674 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:06:41.674 INFO: A corpus is not provided, starting from an empty corpus 00:06:41.674 #2 INITED exec/s: 0 rss: 65Mb 00:06:41.674 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:41.674 This may also happen if the target rejected all inputs we tried so far 00:06:41.674 [2024-09-30 21:50:26.041409] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:41.674 [2024-09-30 21:50:26.041441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.191 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:06:42.192 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:42.192 #10 NEW cov: 12275 ft: 12244 corp: 2/37b lim: 105 exec/s: 0 rss: 73Mb L: 36/36 MS: 3 ChangeByte-CrossOver-InsertRepeatedBytes- 00:06:42.192 [2024-09-30 21:50:26.372186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.192 [2024-09-30 21:50:26.372222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.192 #11 NEW cov: 12388 ft: 12903 corp: 3/73b lim: 105 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CMP- DE: "\001\000\000\002"- 00:06:42.192 [2024-09-30 21:50:26.432278] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.192 [2024-09-30 21:50:26.432306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.192 #12 NEW cov: 12394 ft: 13122 corp: 4/109b lim: 105 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 CopyPart- 00:06:42.192 [2024-09-30 
21:50:26.472333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.192 [2024-09-30 21:50:26.472364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.192 #13 NEW cov: 12479 ft: 13462 corp: 5/145b lim: 105 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeBinInt- 00:06:42.192 [2024-09-30 21:50:26.532500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.192 [2024-09-30 21:50:26.532528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.192 #14 NEW cov: 12479 ft: 13520 corp: 6/181b lim: 105 exec/s: 0 rss: 73Mb L: 36/36 MS: 1 ChangeByte- 00:06:42.451 [2024-09-30 21:50:26.572734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.572761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.451 [2024-09-30 21:50:26.572798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993500522103490 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.572814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.451 #15 NEW cov: 12479 ft: 14029 corp: 7/228b lim: 105 exec/s: 0 rss: 73Mb L: 47/47 MS: 1 CopyPart- 00:06:42.451 [2024-09-30 21:50:26.632760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.632788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.451 #16 NEW cov: 12479 ft: 14144 corp: 8/264b lim: 105 exec/s: 0 rss: 73Mb L: 36/47 MS: 1 ChangeBinInt- 00:06:42.451 [2024-09-30 21:50:26.672878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.672906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.451 #17 NEW cov: 12479 ft: 14159 corp: 9/300b lim: 105 exec/s: 0 rss: 73Mb L: 36/47 MS: 1 CrossOver- 00:06:42.451 [2024-09-30 21:50:26.713001] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.713029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.451 #18 NEW cov: 12479 ft: 14172 corp: 10/336b lim: 105 exec/s: 0 rss: 73Mb L: 36/47 MS: 1 ChangeByte- 00:06:42.451 [2024-09-30 21:50:26.773280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.773313] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.451 [2024-09-30 21:50:26.773350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993529412469442 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.451 [2024-09-30 21:50:26.773366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.451 #19 NEW cov: 12479 ft: 14237 corp: 11/381b lim: 105 exec/s: 0 rss: 74Mb L: 45/47 MS: 1 InsertRepeatedBytes- 00:06:42.710 [2024-09-30 21:50:26.833314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.710 [2024-09-30 21:50:26.833342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.710 #20 NEW cov: 12479 ft: 14263 corp: 12/418b lim: 105 exec/s: 0 rss: 74Mb L: 37/47 MS: 1 CrossOver- 00:06:42.710 [2024-09-30 21:50:26.893468] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.710 [2024-09-30 21:50:26.893496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.710 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:42.710 #21 NEW cov: 12502 ft: 14311 corp: 13/454b lim: 105 exec/s: 0 rss: 74Mb L: 36/47 MS: 1 ChangeBinInt- 00:06:42.710 [2024-09-30 21:50:26.933590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.710 [2024-09-30 21:50:26.933618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.710 #22 NEW cov: 12502 ft: 14377 corp: 14/494b lim: 105 exec/s: 0 rss: 74Mb L: 40/47 MS: 1 PersAutoDict- DE: "\001\000\000\002"- 00:06:42.710 [2024-09-30 21:50:26.973707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.711 [2024-09-30 21:50:26.973735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.711 #23 NEW cov: 12502 ft: 14414 corp: 15/534b lim: 105 exec/s: 0 rss: 74Mb L: 40/47 MS: 1 CopyPart- 00:06:42.711 [2024-09-30 21:50:27.033950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2432720887393469122 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.711 [2024-09-30 21:50:27.033979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.711 [2024-09-30 21:50:27.034015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993500522103490 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.711 [2024-09-30 21:50:27.034032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.711 #24 NEW cov: 12502 ft: 14418 corp: 16/581b lim: 105 exec/s: 24 rss: 74Mb L: 47/47 MS: 1 ChangeByte- 
00:06:42.970 [2024-09-30 21:50:27.094139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.094166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.970 [2024-09-30 21:50:27.094204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:2755 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.094219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.970 #25 NEW cov: 12502 ft: 14484 corp: 17/629b lim: 105 exec/s: 25 rss: 74Mb L: 48/48 MS: 1 CrossOver- 00:06:42.970 [2024-09-30 21:50:27.134150] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:5570193307235060448 len:19790 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.134177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.970 #30 NEW cov: 12502 ft: 14508 corp: 18/664b lim: 105 exec/s: 30 rss: 74Mb L: 35/48 MS: 5 PersAutoDict-ChangeByte-InsertByte-ShuffleBytes-InsertRepeatedBytes- DE: "\001\000\000\002"- 00:06:42.970 [2024-09-30 21:50:27.174608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.174636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.970 [2024-09-30 21:50:27.174688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.174701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.970 [2024-09-30 21:50:27.174753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.174769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:42.970 [2024-09-30 21:50:27.174820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8989961946752908412 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.174837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:42.970 #31 NEW cov: 12502 ft: 15095 corp: 19/755b lim: 105 exec/s: 31 rss: 74Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:06:42.970 [2024-09-30 21:50:27.234474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033992964859151042 len:15678 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.234504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.970 #32 NEW cov: 12502 ft: 15131 corp: 20/791b lim: 105 exec/s: 32 rss: 74Mb L: 36/91 MS: 1 ChangeBinInt- 00:06:42.970 
[2024-09-30 21:50:27.274694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.274724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:42.970 [2024-09-30 21:50:27.274771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:48067 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.274788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:42.970 #33 NEW cov: 12502 ft: 15144 corp: 21/839b lim: 105 exec/s: 33 rss: 74Mb L: 48/91 MS: 1 ShuffleBytes- 00:06:42.970 [2024-09-30 21:50:27.334717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.970 [2024-09-30 21:50:27.334747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.229 #34 NEW cov: 12502 ft: 15166 corp: 22/876b lim: 105 exec/s: 34 rss: 74Mb L: 37/91 MS: 1 InsertByte- 00:06:43.229 [2024-09-30 21:50:27.374816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.229 [2024-09-30 21:50:27.374845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.229 #35 NEW cov: 12502 ft: 15191 corp: 23/914b lim: 105 exec/s: 35 rss: 74Mb L: 38/91 MS: 1 InsertByte- 00:06:43.229 [2024-09-30 21:50:27.435122] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:2648893669507252930 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.229 [2024-09-30 21:50:27.435150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.229 [2024-09-30 21:50:27.435186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:48067 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.229 [2024-09-30 21:50:27.435200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.229 #36 NEW cov: 12502 ft: 15288 corp: 24/962b lim: 105 exec/s: 36 rss: 74Mb L: 48/91 MS: 1 ChangeByte- 00:06:43.229 [2024-09-30 21:50:27.495296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.229 [2024-09-30 21:50:27.495329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.230 [2024-09-30 21:50:27.495370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:49920 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.230 [2024-09-30 21:50:27.495387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.230 #37 NEW cov: 12502 ft: 15333 corp: 25/1006b lim: 105 exec/s: 37 rss: 75Mb L: 44/91 MS: 1 CMP- DE: 
"\377\377\377\377\377\377\377\017"- 00:06:43.230 [2024-09-30 21:50:27.555462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499819714 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.230 [2024-09-30 21:50:27.555490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.230 [2024-09-30 21:50:27.555527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:49852 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.230 [2024-09-30 21:50:27.555547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.230 #38 NEW cov: 12502 ft: 15341 corp: 26/1050b lim: 105 exec/s: 38 rss: 75Mb L: 44/91 MS: 1 CrossOver- 00:06:43.489 [2024-09-30 21:50:27.615835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.615865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.615912] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.615928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.615979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.615995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.616047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8989961946752908412 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.616063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.489 #39 NEW cov: 12502 ft: 15373 corp: 27/1141b lim: 105 exec/s: 39 rss: 75Mb L: 91/91 MS: 1 ChangeByte- 00:06:43.489 [2024-09-30 21:50:27.675672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.675700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.489 #40 NEW cov: 12502 ft: 15399 corp: 28/1177b lim: 105 exec/s: 40 rss: 75Mb L: 36/91 MS: 1 ChangeByte- 00:06:43.489 [2024-09-30 21:50:27.735931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.735959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.735995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:2755 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.736010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.489 #41 NEW cov: 12502 ft: 15402 corp: 29/1225b lim: 105 exec/s: 41 rss: 75Mb L: 48/91 MS: 1 CopyPart- 00:06:43.489 [2024-09-30 21:50:27.776252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.776280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.776336] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.776352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.776406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.776421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.776474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:8989822837057158268 len:15678 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.776494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.489 #42 NEW cov: 12502 ft: 15423 corp: 30/1316b lim: 105 exec/s: 42 rss: 75Mb L: 91/91 MS: 1 ChangeBinInt- 00:06:43.489 [2024-09-30 21:50:27.816380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.816407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.489 [2024-09-30 21:50:27.816462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.489 [2024-09-30 21:50:27.816477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.490 [2024-09-30 21:50:27.816529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.816545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.490 [2024-09-30 21:50:27.816598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:14033993500522103490 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.816613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.490 #43 NEW cov: 12502 ft: 15429 corp: 31/1405b lim: 105 exec/s: 43 rss: 75Mb L: 89/91 MS: 1 InsertRepeatedBytes- 00:06:43.490 [2024-09-30 21:50:27.856562] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.856590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.490 [2024-09-30 21:50:27.856644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.856661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.490 [2024-09-30 21:50:27.856715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.856731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:43.490 [2024-09-30 21:50:27.856784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:14033993500522103490 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.490 [2024-09-30 21:50:27.856801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:43.750 #44 NEW cov: 12502 ft: 15443 corp: 32/1501b lim: 105 exec/s: 44 rss: 75Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:06:43.750 [2024-09-30 21:50:27.916331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069595135999 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.750 [2024-09-30 21:50:27.916359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.750 #45 NEW cov: 12502 ft: 15492 corp: 33/1541b lim: 105 exec/s: 45 rss: 75Mb L: 40/96 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\017"- 00:06:43.750 [2024-09-30 21:50:27.956570] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.750 [2024-09-30 21:50:27.956597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.750 [2024-09-30 21:50:27.956644] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14033993530586874562 len:2755 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.750 [2024-09-30 21:50:27.956661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:43.750 #46 NEW cov: 12502 ft: 15553 corp: 34/1597b lim: 105 exec/s: 46 rss: 75Mb L: 56/96 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\017"- 00:06:43.750 [2024-09-30 21:50:27.996535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14033993527499866818 len:49859 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:43.750 [2024-09-30 21:50:27.996564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:43.750 #52 NEW cov: 12502 ft: 15577 corp: 35/1624b lim: 105 exec/s: 26 rss: 75Mb L: 27/96 MS: 1 EraseBytes- 00:06:43.750 #52 DONE cov: 12502 ft: 15577 corp: 35/1624b lim: 105 exec/s: 26 rss: 75Mb 
00:06:43.750 ###### Recommended dictionary. ###### 00:06:43.750 "\001\000\000\002" # Uses: 2 00:06:43.750 "\377\377\377\377\377\377\377\017" # Uses: 3 00:06:43.750 ###### End of recommended dictionary. ###### 00:06:43.750 Done 52 runs in 2 second(s) 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:44.009 21:50:28 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:06:44.009 [2024-09-30 21:50:28.191477] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:44.009 [2024-09-30 21:50:28.191561] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052007 ] 00:06:44.010 [2024-09-30 21:50:28.376375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.269 [2024-09-30 21:50:28.442528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.269 [2024-09-30 21:50:28.501273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.269 [2024-09-30 21:50:28.517651] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:06:44.269 INFO: Running with entropic power schedule (0xFF, 100). 00:06:44.269 INFO: Seed: 3071465905 00:06:44.269 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:44.269 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:44.269 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:06:44.269 INFO: A corpus is not provided, starting from an empty corpus 00:06:44.269 #2 INITED exec/s: 0 rss: 66Mb 00:06:44.269 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:44.269 This may also happen if the target rejected all inputs we tried so far 00:06:44.269 [2024-09-30 21:50:28.562501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.269 [2024-09-30 21:50:28.562534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.269 [2024-09-30 21:50:28.562584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.269 [2024-09-30 21:50:28.562603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.269 [2024-09-30 21:50:28.562632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.269 [2024-09-30 21:50:28.562649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.269 [2024-09-30 21:50:28.562676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.269 [2024-09-30 21:50:28.562692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.839 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:06:44.839 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:44.839 #5 NEW cov: 12296 ft: 12291 corp: 2/106b lim: 120 exec/s: 0 rss: 73Mb L: 105/105 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:06:44.839 [2024-09-30 21:50:28.945345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.839 [2024-09-30 21:50:28.945387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.839 [2024-09-30 21:50:28.945514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.839 [2024-09-30 21:50:28.945537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.839 [2024-09-30 21:50:28.945648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.839 [2024-09-30 21:50:28.945672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.839 [2024-09-30 21:50:28.945795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12080692964266321831 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.839 [2024-09-30 21:50:28.945819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.839 #6 NEW cov: 12409 ft: 13113 corp: 3/222b lim: 120 exec/s: 0 rss: 73Mb L: 116/116 MS: 1 InsertRepeatedBytes- 00:06:44.839 [2024-09-30 21:50:29.005267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.839 [2024-09-30 21:50:29.005300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.005380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.005403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.005518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.005543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.005663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.005688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:44.840 #7 NEW cov: 12415 ft: 13374 corp: 4/327b lim: 120 exec/s: 0 rss: 73Mb L: 105/116 MS: 1 ShuffleBytes- 00:06:44.840 [2024-09-30 21:50:29.054949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.054986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.055105] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.055133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.840 #11 NEW cov: 12500 ft: 14046 corp: 5/395b lim: 120 exec/s: 0 rss: 73Mb L: 68/116 MS: 4 ChangeBit-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:06:44.840 [2024-09-30 21:50:29.095027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.095061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.095181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.095204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.840 #12 NEW cov: 12500 ft: 14174 corp: 6/458b lim: 120 exec/s: 0 rss: 73Mb L: 63/116 MS: 1 EraseBytes- 00:06:44.840 [2024-09-30 21:50:29.155257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.155287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.155414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.155434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:44.840 #13 NEW cov: 12500 ft: 14325 corp: 7/526b lim: 120 exec/s: 0 rss: 73Mb L: 68/116 MS: 1 CrossOver- 00:06:44.840 [2024-09-30 21:50:29.205320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.205356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:44.840 [2024-09-30 21:50:29.205453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:44.840 [2024-09-30 21:50:29.205472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.100 #14 NEW cov: 12500 ft: 14409 corp: 8/594b lim: 120 exec/s: 0 rss: 73Mb L: 68/116 MS: 1 ChangeBit- 00:06:45.101 [2024-09-30 21:50:29.265622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.265655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.265764] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.265788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.101 #15 NEW cov: 12500 ft: 14461 corp: 9/657b lim: 120 exec/s: 0 rss: 73Mb L: 63/116 MS: 1 ShuffleBytes- 00:06:45.101 [2024-09-30 21:50:29.325703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.325737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.325860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.325883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.101 #16 NEW cov: 12500 ft: 14501 corp: 10/727b lim: 120 exec/s: 0 rss: 73Mb L: 70/116 MS: 1 CMP- DE: "\001\000"- 00:06:45.101 [2024-09-30 21:50:29.375747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.375780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.375902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.375926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.101 #17 NEW cov: 12500 ft: 14525 corp: 11/786b lim: 120 exec/s: 0 rss: 73Mb L: 59/116 MS: 1 EraseBytes- 00:06:45.101 [2024-09-30 21:50:29.415913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.415949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.416072] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.416099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.101 #18 NEW cov: 12500 ft: 14561 corp: 12/848b lim: 120 exec/s: 0 rss: 73Mb L: 62/116 MS: 1 EraseBytes- 00:06:45.101 [2024-09-30 21:50:29.466672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.466709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.466792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 
21:50:29.466811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.466921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.466943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.101 [2024-09-30 21:50:29.467064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.101 [2024-09-30 21:50:29.467088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.360 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:45.360 #19 NEW cov: 12523 ft: 14602 corp: 13/953b lim: 120 exec/s: 0 rss: 73Mb L: 105/116 MS: 1 CopyPart- 00:06:45.360 [2024-09-30 21:50:29.505909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:440729600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.360 [2024-09-30 21:50:29.505936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.360 #21 NEW cov: 12523 ft: 15408 corp: 14/994b lim: 120 exec/s: 0 rss: 74Mb L: 41/116 MS: 2 CrossOver-InsertRepeatedBytes- 00:06:45.360 [2024-09-30 21:50:29.566889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.360 [2024-09-30 21:50:29.566919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.360 [2024-09-30 21:50:29.566983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5044031579522155845 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.360 [2024-09-30 21:50:29.567006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.360 [2024-09-30 21:50:29.567123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.360 [2024-09-30 21:50:29.567147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.360 [2024-09-30 21:50:29.567263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4991471928960090111 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.360 [2024-09-30 21:50:29.567285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.360 #22 NEW cov: 12523 ft: 15421 corp: 15/1109b lim: 120 exec/s: 22 rss: 74Mb L: 115/116 MS: 1 InsertRepeatedBytes- 00:06:45.360 [2024-09-30 21:50:29.616983] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.617013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.361 [2024-09-30 21:50:29.617092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:5044031579522155845 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.617113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.361 [2024-09-30 21:50:29.617236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.617260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.361 [2024-09-30 21:50:29.617371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4991471928960090111 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.617393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.361 #23 NEW cov: 12523 ft: 15438 corp: 16/1224b lim: 120 exec/s: 23 rss: 74Mb L: 115/116 MS: 1 ShuffleBytes- 00:06:45.361 [2024-09-30 21:50:29.686888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72057589910743809 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.686920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.361 [2024-09-30 21:50:29.687007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.687031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.361 [2024-09-30 21:50:29.687154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.687176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.361 #28 NEW cov: 12523 ft: 15850 corp: 17/1303b lim: 120 exec/s: 28 rss: 74Mb L: 79/116 MS: 5 ShuffleBytes-ChangeByte-CMP-PersAutoDict-InsertRepeatedBytes- DE: "\012\000"-"\001\000"- 00:06:45.361 [2024-09-30 21:50:29.726623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10995557007360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.361 [2024-09-30 21:50:29.726654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.620 #29 NEW cov: 12523 ft: 15917 corp: 18/1344b lim: 120 exec/s: 29 rss: 74Mb L: 41/116 MS: 1 PersAutoDict- DE: "\012\000"- 00:06:45.620 [2024-09-30 21:50:29.787394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.787424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.787511] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.787530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.787647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.787673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.787781] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.787803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:45.620 #31 NEW cov: 12523 ft: 15965 corp: 19/1456b lim: 120 exec/s: 31 rss: 74Mb L: 112/116 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:45.620 [2024-09-30 21:50:29.837193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.837223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.837356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.837381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.620 #32 NEW cov: 12523 ft: 15995 corp: 20/1515b lim: 120 exec/s: 32 rss: 74Mb L: 59/116 MS: 1 ChangeByte- 00:06:45.620 [2024-09-30 21:50:29.907423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.907456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.907578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.907600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.620 #33 NEW cov: 12523 ft: 16016 corp: 21/1583b lim: 120 exec/s: 33 rss: 74Mb L: 68/116 MS: 1 ChangeBit- 00:06:45.620 [2024-09-30 21:50:29.967732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.967762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.967857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.967881] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.620 [2024-09-30 21:50:29.967997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.620 [2024-09-30 21:50:29.968015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:45.880 #34 NEW cov: 12523 ft: 16084 corp: 22/1663b lim: 120 exec/s: 34 rss: 74Mb L: 80/116 MS: 1 EraseBytes- 00:06:45.880 [2024-09-30 21:50:30.037748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72057589910743809 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.880 [2024-09-30 21:50:30.037783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.881 [2024-09-30 21:50:30.037878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.037898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.881 #35 NEW cov: 12523 ft: 16110 corp: 23/1715b lim: 120 exec/s: 35 rss: 74Mb L: 52/116 MS: 1 EraseBytes- 00:06:45.881 [2024-09-30 21:50:30.107968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4991471925105870149 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.107995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.881 [2024-09-30 21:50:30.108115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4991471925827290437 len:17734 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.108133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.881 #36 NEW cov: 12523 ft: 16264 corp: 24/1785b lim: 120 exec/s: 36 rss: 74Mb L: 70/116 MS: 1 ShuffleBytes- 00:06:45.881 [2024-09-30 21:50:30.158114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:72057589910743809 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.158143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:45.881 [2024-09-30 21:50:30.158262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:72057594037873409 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.158288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.881 #37 NEW cov: 12523 ft: 16308 corp: 25/1837b lim: 120 exec/s: 37 rss: 74Mb L: 52/116 MS: 1 CopyPart- 00:06:45.881 [2024-09-30 21:50:30.218206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:440729600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.218239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:45.881 [2024-09-30 21:50:30.218358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:68436008894464 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:45.881 [2024-09-30 21:50:30.218382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:45.881 #38 NEW cov: 12523 ft: 16327 corp: 26/1893b lim: 120 exec/s: 38 rss: 74Mb L: 56/116 MS: 1 CrossOver- 00:06:46.146 [2024-09-30 21:50:30.258123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:10995557007360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.258149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.146 #39 NEW cov: 12523 ft: 16343 corp: 27/1934b lim: 120 exec/s: 39 rss: 74Mb L: 41/116 MS: 1 ChangeBit- 00:06:46.146 [2024-09-30 21:50:30.328638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:440860672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.328671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.328755] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:68436008894464 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.328782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.146 #40 NEW cov: 12523 ft: 16357 corp: 28/1990b lim: 120 exec/s: 40 rss: 75Mb L: 56/116 MS: 1 ChangeBinInt- 00:06:46.146 [2024-09-30 21:50:30.399267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.399298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.399372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.399393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.399507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.399531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.399653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:12080692964266321831 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.399678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.146 #41 NEW cov: 12523 ft: 16368 corp: 29/2106b lim: 120 exec/s: 41 rss: 75Mb L: 116/116 MS: 1 ChangeASCIIInt- 00:06:46.146 [2024-09-30 21:50:30.469541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:18446743781651775487 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.469577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.469696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.469721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.469848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.469872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.146 [2024-09-30 21:50:30.469987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.146 [2024-09-30 21:50:30.470010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.146 #42 NEW cov: 12523 ft: 16388 corp: 30/2219b lim: 120 exec/s: 42 rss: 75Mb L: 113/116 MS: 1 InsertByte- 00:06:46.405 [2024-09-30 21:50:30.539701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.405 [2024-09-30 21:50:30.539749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.405 [2024-09-30 21:50:30.539868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.405 [2024-09-30 21:50:30.539895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.405 [2024-09-30 21:50:30.540025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.405 [2024-09-30 21:50:30.540048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.405 [2024-09-30 21:50:30.540173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:4485090715960753726 len:15935 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:06:46.405 [2024-09-30 21:50:30.540198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:46.405 #48 NEW cov: 12523 ft: 16406 corp: 31/2324b lim: 120 exec/s: 24 rss: 75Mb L: 105/116 MS: 1 PersAutoDict- DE: "\001\000"- 00:06:46.405 #48 DONE cov: 12523 ft: 16406 corp: 31/2324b lim: 120 exec/s: 24 rss: 75Mb 00:06:46.405 ###### Recommended dictionary. ###### 00:06:46.405 "\001\000" # Uses: 2 00:06:46.405 "\012\000" # Uses: 1 00:06:46.405 ###### End of recommended dictionary. 
###### 00:06:46.405 Done 48 runs in 2 second(s) 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:46.405 21:50:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:06:46.405 [2024-09-30 21:50:30.734495] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:46.405 [2024-09-30 21:50:30.734568] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052498 ] 00:06:46.663 [2024-09-30 21:50:30.916694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.663 [2024-09-30 21:50:30.984053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.922 [2024-09-30 21:50:31.043317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.922 [2024-09-30 21:50:31.059677] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:06:46.922 INFO: Running with entropic power schedule (0xFF, 100). 00:06:46.922 INFO: Seed: 1318473902 00:06:46.922 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:46.922 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:46.922 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:06:46.922 INFO: A corpus is not provided, starting from an empty corpus 00:06:46.922 #2 INITED exec/s: 0 rss: 64Mb 00:06:46.922 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:46.922 This may also happen if the target rejected all inputs we tried so far 00:06:46.922 [2024-09-30 21:50:31.115149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:46.922 [2024-09-30 21:50:31.115179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:46.923 [2024-09-30 21:50:31.115221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:46.923 [2024-09-30 21:50:31.115236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:46.923 [2024-09-30 21:50:31.115285] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:46.923 [2024-09-30 21:50:31.115300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:46.923 [2024-09-30 21:50:31.115357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:46.923 [2024-09-30 21:50:31.115372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.182 NEW_FUNC[1/713]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:06:47.182 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:47.182 #13 NEW cov: 12236 ft: 12234 corp: 2/82b lim: 100 exec/s: 0 rss: 72Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:06:47.182 [2024-09-30 21:50:31.446206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.182 [2024-09-30 21:50:31.446245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.446303] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.182 [2024-09-30 21:50:31.446326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.446381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.182 [2024-09-30 21:50:31.446405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.446484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.182 [2024-09-30 21:50:31.446506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.182 NEW_FUNC[1/1]: 0x1f3b988 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:807 00:06:47.182 #19 NEW cov: 12352 ft: 12914 corp: 3/163b lim: 100 exec/s: 0 rss: 72Mb L: 81/81 MS: 1 ChangeByte- 00:06:47.182 [2024-09-30 21:50:31.506255] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.182 [2024-09-30 21:50:31.506282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.506338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.182 [2024-09-30 21:50:31.506353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.506406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.182 [2024-09-30 21:50:31.506421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.182 [2024-09-30 21:50:31.506477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.182 [2024-09-30 21:50:31.506492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.182 #25 NEW cov: 12358 ft: 13101 corp: 4/244b lim: 100 exec/s: 0 rss: 72Mb L: 81/81 MS: 1 ChangeByte- 00:06:47.440 [2024-09-30 21:50:31.566418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.440 [2024-09-30 21:50:31.566445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.440 [2024-09-30 21:50:31.566497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.440 [2024-09-30 21:50:31.566512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.440 [2024-09-30 21:50:31.566563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.440 [2024-09-30 21:50:31.566582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.440 [2024-09-30 21:50:31.566636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE 
ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.440 [2024-09-30 21:50:31.566651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.440 #26 NEW cov: 12443 ft: 13341 corp: 5/326b lim: 100 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 InsertByte- 00:06:47.440 [2024-09-30 21:50:31.626573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.440 [2024-09-30 21:50:31.626602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.440 [2024-09-30 21:50:31.626651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.440 [2024-09-30 21:50:31.626666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.440 [2024-09-30 21:50:31.626718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.440 [2024-09-30 21:50:31.626734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.626790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.441 [2024-09-30 21:50:31.626805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.441 #27 NEW cov: 12443 ft: 13569 corp: 6/407b lim: 100 exec/s: 0 rss: 72Mb L: 81/82 MS: 1 ChangeByte- 00:06:47.441 [2024-09-30 21:50:31.666663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.441 [2024-09-30 21:50:31.666690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.666742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.441 [2024-09-30 21:50:31.666754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.666808] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.441 [2024-09-30 21:50:31.666824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.666878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.441 [2024-09-30 21:50:31.666894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.441 #28 NEW cov: 12443 ft: 13700 corp: 7/488b lim: 100 exec/s: 0 rss: 72Mb L: 81/82 MS: 1 ChangeBit- 00:06:47.441 [2024-09-30 21:50:31.706774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.441 [2024-09-30 21:50:31.706800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.706855] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.441 [2024-09-30 21:50:31.706866] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.706936] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.441 [2024-09-30 21:50:31.706955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.707025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.441 [2024-09-30 21:50:31.707050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.441 #29 NEW cov: 12443 ft: 13818 corp: 8/570b lim: 100 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 ChangeBinInt- 00:06:47.441 [2024-09-30 21:50:31.766932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.441 [2024-09-30 21:50:31.766958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.767016] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.441 [2024-09-30 21:50:31.767031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.767084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.441 [2024-09-30 21:50:31.767099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.767154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.441 [2024-09-30 21:50:31.767169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.441 #30 NEW cov: 12443 ft: 13907 corp: 9/652b lim: 100 exec/s: 0 rss: 72Mb L: 82/82 MS: 1 InsertByte- 00:06:47.441 [2024-09-30 21:50:31.807063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.441 [2024-09-30 21:50:31.807091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.807149] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.441 [2024-09-30 21:50:31.807164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.807221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.441 [2024-09-30 21:50:31.807236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.441 [2024-09-30 21:50:31.807292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.441 [2024-09-30 21:50:31.807314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.700 #31 NEW cov: 12443 ft: 13980 corp: 10/735b 
lim: 100 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 InsertByte- 00:06:47.700 [2024-09-30 21:50:31.867370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.700 [2024-09-30 21:50:31.867398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.867457] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.700 [2024-09-30 21:50:31.867473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.867528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.700 [2024-09-30 21:50:31.867543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.867598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.700 [2024-09-30 21:50:31.867613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.867669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:47.700 [2024-09-30 21:50:31.867685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.700 #35 NEW cov: 12443 ft: 14053 corp: 11/835b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 4 ShuffleBytes-CMP-ChangeByte-InsertRepeatedBytes- DE: "\011\000\000\000"- 00:06:47.700 [2024-09-30 21:50:31.907324] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.700 [2024-09-30 21:50:31.907366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.907411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.700 [2024-09-30 21:50:31.907426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.907481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.700 [2024-09-30 21:50:31.907496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.907552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.700 [2024-09-30 21:50:31.907568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.700 #36 NEW cov: 12443 ft: 14158 corp: 12/917b lim: 100 exec/s: 0 rss: 73Mb L: 82/100 MS: 1 InsertByte- 00:06:47.700 [2024-09-30 21:50:31.947610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.700 [2024-09-30 21:50:31.947638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.947694] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.700 [2024-09-30 21:50:31.947707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.947762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.700 [2024-09-30 21:50:31.947777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.947831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.700 [2024-09-30 21:50:31.947846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:31.947899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:06:47.700 [2024-09-30 21:50:31.947914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:47.700 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:47.700 #37 NEW cov: 12466 ft: 14221 corp: 13/1017b lim: 100 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 ShuffleBytes- 00:06:47.700 [2024-09-30 21:50:32.007638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.700 [2024-09-30 21:50:32.007666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.007716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.700 [2024-09-30 21:50:32.007732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.007783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.700 [2024-09-30 21:50:32.007797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.007851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.700 [2024-09-30 21:50:32.007868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.700 #38 NEW cov: 12466 ft: 14280 corp: 14/1099b lim: 100 exec/s: 0 rss: 73Mb L: 82/100 MS: 1 ChangeByte- 00:06:47.700 [2024-09-30 21:50:32.067812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.700 [2024-09-30 21:50:32.067839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.067891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.700 [2024-09-30 21:50:32.067905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.067957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:2 nsid:0 00:06:47.700 [2024-09-30 21:50:32.067974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.700 [2024-09-30 21:50:32.068028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.700 [2024-09-30 21:50:32.068044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.959 #39 NEW cov: 12466 ft: 14296 corp: 15/1181b lim: 100 exec/s: 39 rss: 73Mb L: 82/100 MS: 1 InsertByte- 00:06:47.959 [2024-09-30 21:50:32.127977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.959 [2024-09-30 21:50:32.128005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.128055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.959 [2024-09-30 21:50:32.128070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.128124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.959 [2024-09-30 21:50:32.128138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.128192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.959 [2024-09-30 21:50:32.128207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.959 #40 NEW cov: 12466 ft: 14306 corp: 16/1263b lim: 100 exec/s: 40 rss: 73Mb L: 82/100 MS: 1 ChangeBit- 00:06:47.959 [2024-09-30 21:50:32.168075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.959 [2024-09-30 21:50:32.168103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.168160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.959 [2024-09-30 21:50:32.168174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.168227] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.959 [2024-09-30 21:50:32.168241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.168294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.959 [2024-09-30 21:50:32.168312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.959 #41 NEW cov: 12466 ft: 14345 corp: 17/1346b lim: 100 exec/s: 41 rss: 73Mb L: 83/100 MS: 1 InsertByte- 00:06:47.959 [2024-09-30 21:50:32.208203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.959 [2024-09-30 21:50:32.208234] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.208272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.959 [2024-09-30 21:50:32.208287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.208340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.959 [2024-09-30 21:50:32.208357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.208412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.959 [2024-09-30 21:50:32.208426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.959 #42 NEW cov: 12466 ft: 14396 corp: 18/1432b lim: 100 exec/s: 42 rss: 73Mb L: 86/100 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:47.959 [2024-09-30 21:50:32.248290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.959 [2024-09-30 21:50:32.248321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.248380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:47.959 [2024-09-30 21:50:32.248396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.248453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:47.959 [2024-09-30 21:50:32.248468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:47.959 [2024-09-30 21:50:32.248525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:47.959 [2024-09-30 21:50:32.248541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:47.959 #43 NEW cov: 12466 ft: 14468 corp: 19/1525b lim: 100 exec/s: 43 rss: 73Mb L: 93/100 MS: 1 CrossOver- 00:06:47.959 [2024-09-30 21:50:32.308133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:47.959 [2024-09-30 21:50:32.308159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 #44 NEW cov: 12466 ft: 14844 corp: 20/1548b lim: 100 exec/s: 44 rss: 73Mb L: 23/100 MS: 1 CrossOver- 00:06:48.219 [2024-09-30 21:50:32.348599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.219 [2024-09-30 21:50:32.348626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.348682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.219 [2024-09-30 21:50:32.348698] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.348752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.219 [2024-09-30 21:50:32.348767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.348820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.219 [2024-09-30 21:50:32.348835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.219 #45 NEW cov: 12466 ft: 14883 corp: 21/1629b lim: 100 exec/s: 45 rss: 73Mb L: 81/100 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:48.219 [2024-09-30 21:50:32.388705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.219 [2024-09-30 21:50:32.388732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.388784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.219 [2024-09-30 21:50:32.388800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.388852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.219 [2024-09-30 21:50:32.388866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.388921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.219 [2024-09-30 21:50:32.388935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.219 #46 NEW cov: 12466 ft: 14903 corp: 22/1711b lim: 100 exec/s: 46 rss: 73Mb L: 82/100 MS: 1 ChangeByte- 00:06:48.219 [2024-09-30 21:50:32.448887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.219 [2024-09-30 21:50:32.448913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.448967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.219 [2024-09-30 21:50:32.448982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.449036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.219 [2024-09-30 21:50:32.449050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.449104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.219 [2024-09-30 21:50:32.449118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.219 #47 NEW cov: 12466 ft: 14910 corp: 23/1792b lim: 100 
exec/s: 47 rss: 73Mb L: 81/100 MS: 1 ChangeBit- 00:06:48.219 [2024-09-30 21:50:32.488990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.219 [2024-09-30 21:50:32.489016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.489072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.219 [2024-09-30 21:50:32.489087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.489138] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.219 [2024-09-30 21:50:32.489153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.489209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.219 [2024-09-30 21:50:32.489223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.219 #48 NEW cov: 12466 ft: 14949 corp: 24/1874b lim: 100 exec/s: 48 rss: 73Mb L: 82/100 MS: 1 CopyPart- 00:06:48.219 [2024-09-30 21:50:32.549144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.219 [2024-09-30 21:50:32.549170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.549219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.219 [2024-09-30 21:50:32.549235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.549291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.219 [2024-09-30 21:50:32.549312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.219 [2024-09-30 21:50:32.549332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.219 [2024-09-30 21:50:32.549346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.219 #49 NEW cov: 12466 ft: 14961 corp: 25/1956b lim: 100 exec/s: 49 rss: 73Mb L: 82/100 MS: 1 ChangeBit- 00:06:48.479 [2024-09-30 21:50:32.589284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.589313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.589364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.589379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.589433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 
00:06:48.479 [2024-09-30 21:50:32.589449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.589504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.479 [2024-09-30 21:50:32.589519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.479 #50 NEW cov: 12466 ft: 14973 corp: 26/2053b lim: 100 exec/s: 50 rss: 73Mb L: 97/100 MS: 1 InsertRepeatedBytes- 00:06:48.479 [2024-09-30 21:50:32.629106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.629132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.629167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.629183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 #51 NEW cov: 12466 ft: 15258 corp: 27/2098b lim: 100 exec/s: 51 rss: 73Mb L: 45/100 MS: 1 EraseBytes- 00:06:48.479 [2024-09-30 21:50:32.689555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.689582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.689639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.689654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.689709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.479 [2024-09-30 21:50:32.689725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.689782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.479 [2024-09-30 21:50:32.689796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.479 #52 NEW cov: 12466 ft: 15262 corp: 28/2182b lim: 100 exec/s: 52 rss: 73Mb L: 84/100 MS: 1 CrossOver- 00:06:48.479 [2024-09-30 21:50:32.729638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.729664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.729722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.729737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.729792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.479 [2024-09-30 21:50:32.729807] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.729865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.479 [2024-09-30 21:50:32.729880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.479 #53 NEW cov: 12466 ft: 15278 corp: 29/2264b lim: 100 exec/s: 53 rss: 74Mb L: 82/100 MS: 1 ShuffleBytes- 00:06:48.479 [2024-09-30 21:50:32.789831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.789858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.789907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.789922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.789975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.479 [2024-09-30 21:50:32.789990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.790045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.479 [2024-09-30 21:50:32.790061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.479 #54 NEW cov: 12466 ft: 15294 corp: 30/2350b lim: 100 exec/s: 54 rss: 74Mb L: 86/100 MS: 1 PersAutoDict- DE: "\011\000\000\000"- 00:06:48.479 [2024-09-30 21:50:32.829904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.479 [2024-09-30 21:50:32.829931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.829989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.479 [2024-09-30 21:50:32.830005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.830063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.479 [2024-09-30 21:50:32.830077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.479 [2024-09-30 21:50:32.830132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.479 [2024-09-30 21:50:32.830148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.738 #55 NEW cov: 12466 ft: 15310 corp: 31/2431b lim: 100 exec/s: 55 rss: 74Mb L: 81/100 MS: 1 ChangeBit- 00:06:48.738 [2024-09-30 21:50:32.890099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.738 [2024-09-30 21:50:32.890126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.890178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.738 [2024-09-30 21:50:32.890194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.890244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.738 [2024-09-30 21:50:32.890258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.890331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.738 [2024-09-30 21:50:32.890347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.738 #56 NEW cov: 12466 ft: 15319 corp: 32/2513b lim: 100 exec/s: 56 rss: 74Mb L: 82/100 MS: 1 ShuffleBytes- 00:06:48.738 [2024-09-30 21:50:32.930221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.738 [2024-09-30 21:50:32.930249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.930304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.738 [2024-09-30 21:50:32.930325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.930379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.738 [2024-09-30 21:50:32.930394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.738 [2024-09-30 21:50:32.930451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.738 [2024-09-30 21:50:32.930466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.738 #57 NEW cov: 12466 ft: 15367 corp: 33/2610b lim: 100 exec/s: 57 rss: 74Mb L: 97/100 MS: 1 ShuffleBytes- 00:06:48.738 [2024-09-30 21:50:32.990420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.739 [2024-09-30 21:50:32.990448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:32.990501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.739 [2024-09-30 21:50:32.990517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:32.990569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.739 [2024-09-30 21:50:32.990584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:32.990639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.739 [2024-09-30 21:50:32.990654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.739 #58 NEW cov: 12466 ft: 15396 corp: 34/2692b lim: 100 exec/s: 58 rss: 74Mb L: 82/100 MS: 1 ShuffleBytes- 00:06:48.739 [2024-09-30 21:50:33.050550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.739 [2024-09-30 21:50:33.050577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:33.050630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.739 [2024-09-30 21:50:33.050644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:33.050701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.739 [2024-09-30 21:50:33.050714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.739 [2024-09-30 21:50:33.050771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:06:48.739 [2024-09-30 21:50:33.050785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:48.739 #59 NEW cov: 12466 ft: 15402 corp: 35/2786b lim: 100 exec/s: 59 rss: 74Mb L: 94/100 MS: 1 CMP- DE: "\335u\220\005\371=f\000"- 00:06:48.998 [2024-09-30 21:50:33.110637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:06:48.998 [2024-09-30 21:50:33.110663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:48.998 [2024-09-30 21:50:33.110711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:06:48.998 [2024-09-30 21:50:33.110727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:48.998 [2024-09-30 21:50:33.110785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:06:48.998 [2024-09-30 21:50:33.110801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:48.999 #60 NEW cov: 12466 ft: 15631 corp: 36/2865b lim: 100 exec/s: 30 rss: 74Mb L: 79/100 MS: 1 EraseBytes- 00:06:48.999 #60 DONE cov: 12466 ft: 15631 corp: 36/2865b lim: 100 exec/s: 30 rss: 74Mb 00:06:48.999 ###### Recommended dictionary. ###### 00:06:48.999 "\011\000\000\000" # Uses: 3 00:06:48.999 "\335u\220\005\371=f\000" # Uses: 0 00:06:48.999 ###### End of recommended dictionary. 
###### 00:06:48.999 Done 60 runs in 2 second(s) 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:48.999 21:50:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:06:48.999 [2024-09-30 21:50:33.323313] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:48.999 [2024-09-30 21:50:33.323389] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052832 ] 00:06:49.258 [2024-09-30 21:50:33.505030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.258 [2024-09-30 21:50:33.571707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.592 [2024-09-30 21:50:33.630771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.592 [2024-09-30 21:50:33.647152] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:06:49.592 INFO: Running with entropic power schedule (0xFF, 100). 00:06:49.592 INFO: Seed: 3905492348 00:06:49.592 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:49.592 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:49.592 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:06:49.592 INFO: A corpus is not provided, starting from an empty corpus 00:06:49.592 #2 INITED exec/s: 0 rss: 65Mb 00:06:49.592 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:49.592 This may also happen if the target rejected all inputs we tried so far 00:06:49.592 [2024-09-30 21:50:33.713544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:49.592 [2024-09-30 21:50:33.713587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.592 [2024-09-30 21:50:33.713710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:49.592 [2024-09-30 21:50:33.713736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.592 [2024-09-30 21:50:33.713856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:49.592 [2024-09-30 21:50:33.713884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.900 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:06:49.900 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:49.900 #19 NEW cov: 12217 ft: 12207 corp: 2/39b lim: 50 exec/s: 0 rss: 73Mb L: 38/38 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:49.900 [2024-09-30 21:50:34.054609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:49.900 [2024-09-30 21:50:34.054656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.054782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.054810] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.054929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.054961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.055077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.055110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.055233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:49.900 [2024-09-30 21:50:34.055257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.900 #30 NEW cov: 12330 ft: 13126 corp: 3/89b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:06:49.900 [2024-09-30 21:50:34.114583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:49.900 [2024-09-30 21:50:34.114618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.114691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.114716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.114822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.114843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.114947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.114971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.115085] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:49.900 [2024-09-30 21:50:34.115106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.900 #36 NEW cov: 12336 ft: 13402 corp: 4/139b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ShuffleBytes- 00:06:49.900 [2024-09-30 21:50:34.184848] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:49.900 [2024-09-30 21:50:34.184884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.184960] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.184983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.185097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.185121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.185230] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:49.900 [2024-09-30 21:50:34.185253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:49.900 [2024-09-30 21:50:34.185363] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:49.900 [2024-09-30 21:50:34.185388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:49.900 #37 NEW cov: 12421 ft: 13618 corp: 5/189b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CMP- DE: "\377\377\377\377"- 00:06:50.161 [2024-09-30 21:50:34.254689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.161 [2024-09-30 21:50:34.254726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.254840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.254861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.254969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.254994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.161 #41 NEW cov: 12421 ft: 13678 corp: 6/228b lim: 50 exec/s: 0 rss: 73Mb L: 39/50 MS: 4 CopyPart-ChangeBit-EraseBytes-CrossOver- 00:06:50.161 [2024-09-30 21:50:34.305207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.161 [2024-09-30 21:50:34.305235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.305305] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.305325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.305415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18389041703483867135 
len:65536 00:06:50.161 [2024-09-30 21:50:34.305437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.305545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.305569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.305691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.161 [2024-09-30 21:50:34.305716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.161 #42 NEW cov: 12421 ft: 13721 corp: 7/278b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeBinInt- 00:06:50.161 [2024-09-30 21:50:34.355359] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.161 [2024-09-30 21:50:34.355390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.355497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.355522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.355636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:251658240 len:65536 00:06:50.161 [2024-09-30 21:50:34.355656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.355771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.355799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.355916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.161 [2024-09-30 21:50:34.355938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.161 #43 NEW cov: 12421 ft: 13783 corp: 8/328b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"- 00:06:50.161 [2024-09-30 21:50:34.405115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.161 [2024-09-30 21:50:34.405148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.405247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.405270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.405394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.405415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.161 #49 NEW cov: 12421 ft: 13818 corp: 9/367b lim: 50 exec/s: 0 rss: 73Mb L: 39/50 MS: 1 ShuffleBytes- 00:06:50.161 [2024-09-30 21:50:34.475339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.161 [2024-09-30 21:50:34.475373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.475440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.475463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.475573] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.475597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.161 #50 NEW cov: 12421 ft: 13863 corp: 10/406b lim: 50 exec/s: 0 rss: 73Mb L: 39/50 MS: 1 InsertByte- 00:06:50.161 [2024-09-30 21:50:34.515449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.161 [2024-09-30 21:50:34.515478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.515554] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.161 [2024-09-30 21:50:34.515572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.161 [2024-09-30 21:50:34.515686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:251658240 len:65536 00:06:50.161 [2024-09-30 21:50:34.515706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.421 #51 NEW cov: 12421 ft: 13909 corp: 11/443b lim: 50 exec/s: 0 rss: 74Mb L: 37/50 MS: 1 EraseBytes- 00:06:50.421 [2024-09-30 21:50:34.585640] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.421 [2024-09-30 21:50:34.585674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.585778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.421 [2024-09-30 21:50:34.585798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 
21:50:34.585917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743042917400575 len:1 00:06:50.421 [2024-09-30 21:50:34.585939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.421 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:50.421 #52 NEW cov: 12444 ft: 14008 corp: 12/481b lim: 50 exec/s: 0 rss: 74Mb L: 38/50 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:06:50.421 [2024-09-30 21:50:34.625909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:771751936 len:1 00:06:50.421 [2024-09-30 21:50:34.625941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.626013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:06:50.421 [2024-09-30 21:50:34.626036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.626153] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:1 00:06:50.421 [2024-09-30 21:50:34.626172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.626282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:06:50.421 [2024-09-30 21:50:34.626305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.421 #54 NEW cov: 12444 ft: 14051 corp: 13/527b lim: 50 exec/s: 0 rss: 74Mb L: 46/50 MS: 2 InsertByte-InsertRepeatedBytes- 00:06:50.421 [2024-09-30 21:50:34.675709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.421 [2024-09-30 21:50:34.675742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.675849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4294913536 len:1 00:06:50.421 [2024-09-30 21:50:34.675874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.421 #55 NEW cov: 12444 ft: 14467 corp: 14/554b lim: 50 exec/s: 55 rss: 74Mb L: 27/50 MS: 1 CrossOver- 00:06:50.421 [2024-09-30 21:50:34.726478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:50.421 [2024-09-30 21:50:34.726509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.726589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.421 [2024-09-30 21:50:34.726608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:06:50.421 [2024-09-30 21:50:34.726715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.421 [2024-09-30 21:50:34.726738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.726852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.421 [2024-09-30 21:50:34.726879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.726991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.421 [2024-09-30 21:50:34.727015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.421 #56 NEW cov: 12444 ft: 14545 corp: 15/604b lim: 50 exec/s: 56 rss: 74Mb L: 50/50 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\017"- 00:06:50.421 [2024-09-30 21:50:34.776207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.421 [2024-09-30 21:50:34.776240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.776335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18302628885633695743 len:65536 00:06:50.421 [2024-09-30 21:50:34.776363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.421 [2024-09-30 21:50:34.776472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.421 [2024-09-30 21:50:34.776498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.680 #57 NEW cov: 12444 ft: 14645 corp: 16/643b lim: 50 exec/s: 57 rss: 74Mb L: 39/50 MS: 1 ChangeBinInt- 00:06:50.680 [2024-09-30 21:50:34.846364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.680 [2024-09-30 21:50:34.846394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.846464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.680 [2024-09-30 21:50:34.846488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.846588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18389041703483867135 len:65536 00:06:50.680 [2024-09-30 21:50:34.846607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.680 #58 NEW cov: 12444 ft: 14720 corp: 17/678b lim: 50 exec/s: 58 rss: 74Mb L: 35/50 MS: 1 EraseBytes- 00:06:50.680 [2024-09-30 
21:50:34.906535] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.680 [2024-09-30 21:50:34.906566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.906646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.680 [2024-09-30 21:50:34.906668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.906782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.680 [2024-09-30 21:50:34.906806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.680 #59 NEW cov: 12444 ft: 14749 corp: 18/717b lim: 50 exec/s: 59 rss: 74Mb L: 39/50 MS: 1 ChangeASCIIInt- 00:06:50.680 [2024-09-30 21:50:34.957060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:50.680 [2024-09-30 21:50:34.957094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.957175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.680 [2024-09-30 21:50:34.957193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.957302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18376950374113214463 len:65536 00:06:50.680 [2024-09-30 21:50:34.957330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.957456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.680 [2024-09-30 21:50:34.957479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.680 [2024-09-30 21:50:34.957594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.680 [2024-09-30 21:50:34.957615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.680 #60 NEW cov: 12444 ft: 14767 corp: 19/767b lim: 50 exec/s: 60 rss: 74Mb L: 50/50 MS: 1 CrossOver- 00:06:50.680 [2024-09-30 21:50:34.997167] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:50.680 [2024-09-30 21:50:34.997197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.681 [2024-09-30 21:50:34.997280] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.681 
[2024-09-30 21:50:34.997304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.681 [2024-09-30 21:50:34.997420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.681 [2024-09-30 21:50:34.997440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.681 [2024-09-30 21:50:34.997548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.681 [2024-09-30 21:50:34.997569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.681 [2024-09-30 21:50:34.997680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.681 [2024-09-30 21:50:34.997704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.681 #61 NEW cov: 12444 ft: 14838 corp: 20/817b lim: 50 exec/s: 61 rss: 74Mb L: 50/50 MS: 1 CrossOver- 00:06:50.940 [2024-09-30 21:50:35.067086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.067122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.067239] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.067264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.067387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.067413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.940 #62 NEW cov: 12444 ft: 14882 corp: 21/851b lim: 50 exec/s: 62 rss: 74Mb L: 34/50 MS: 1 EraseBytes- 00:06:50.940 [2024-09-30 21:50:35.107164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:50.940 [2024-09-30 21:50:35.107197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.107295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.107316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.107411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.107434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
00:06:50.940 #63 NEW cov: 12444 ft: 14891 corp: 22/890b lim: 50 exec/s: 63 rss: 74Mb L: 39/50 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\017"- 00:06:50.940 [2024-09-30 21:50:35.157730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:50.940 [2024-09-30 21:50:35.157764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.157852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.157873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.157978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.157998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.158107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.158134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.158246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.940 [2024-09-30 21:50:35.158271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.940 #64 NEW cov: 12444 ft: 14917 corp: 23/940b lim: 50 exec/s: 64 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:06:50.940 [2024-09-30 21:50:35.207172] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:792633534417207295 len:65291 00:06:50.940 [2024-09-30 21:50:35.207205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.940 #68 NEW cov: 12444 ft: 15227 corp: 24/954b lim: 50 exec/s: 68 rss: 74Mb L: 14/50 MS: 4 ShuffleBytes-CrossOver-ShuffleBytes-CopyPart- 00:06:50.940 [2024-09-30 21:50:35.258093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3170533106876678143 len:65536 00:06:50.940 [2024-09-30 21:50:35.258126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.258209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.258231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.258335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.258356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.258460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:50.940 [2024-09-30 21:50:35.258485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:50.940 [2024-09-30 21:50:35.258588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:50.940 [2024-09-30 21:50:35.258608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:50.940 #69 NEW cov: 12444 ft: 15250 corp: 25/1004b lim: 50 exec/s: 69 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:06:51.199 [2024-09-30 21:50:35.328369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:51.200 [2024-09-30 21:50:35.328402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.328484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.328505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.328615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:251658240 len:65536 00:06:51.200 [2024-09-30 21:50:35.328639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.328750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.328775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.328886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:51.200 [2024-09-30 21:50:35.328906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.200 #70 NEW cov: 12444 ft: 15260 corp: 26/1054b lim: 50 exec/s: 70 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:06:51.200 [2024-09-30 21:50:35.378474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:51.200 [2024-09-30 21:50:35.378506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.378593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.378612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.378709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 
len:65536 00:06:51.200 [2024-09-30 21:50:35.378730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.378842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.378869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.378980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446589042570035199 len:65330 00:06:51.200 [2024-09-30 21:50:35.379003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.200 #71 NEW cov: 12444 ft: 15276 corp: 27/1104b lim: 50 exec/s: 71 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:06:51.200 [2024-09-30 21:50:35.428619] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:51.200 [2024-09-30 21:50:35.428649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.428701] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.428725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.428838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65528 00:06:51.200 [2024-09-30 21:50:35.428862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.428971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.428995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.429104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:51.200 [2024-09-30 21:50:35.429122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.200 #72 NEW cov: 12444 ft: 15291 corp: 28/1154b lim: 50 exec/s: 72 rss: 74Mb L: 50/50 MS: 1 ChangeBit- 00:06:51.200 [2024-09-30 21:50:35.478346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:51.200 [2024-09-30 21:50:35.478381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.478459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.478479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.478595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.478612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.200 #73 NEW cov: 12444 ft: 15313 corp: 29/1193b lim: 50 exec/s: 73 rss: 74Mb L: 39/50 MS: 1 CopyPart- 00:06:51.200 [2024-09-30 21:50:35.528903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743042917400575 len:65536 00:06:51.200 [2024-09-30 21:50:35.528936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.529016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.529041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.529157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709355007 len:65536 00:06:51.200 [2024-09-30 21:50:35.529186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.529300] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:51.200 [2024-09-30 21:50:35.529331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.200 [2024-09-30 21:50:35.529445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65330 00:06:51.200 [2024-09-30 21:50:35.529469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:51.459 #74 NEW cov: 12444 ft: 15320 corp: 30/1243b lim: 50 exec/s: 74 rss: 74Mb L: 50/50 MS: 1 ChangeByte- 00:06:51.459 [2024-09-30 21:50:35.598763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 00:06:51.459 [2024-09-30 21:50:35.598794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.459 [2024-09-30 21:50:35.598838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.459 [2024-09-30 21:50:35.598854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.459 [2024-09-30 21:50:35.598968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:51.459 [2024-09-30 21:50:35.598990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.460 #75 NEW cov: 12444 ft: 15339 corp: 31/1282b lim: 50 exec/s: 75 rss: 74Mb L: 39/50 MS: 1 ChangeASCIIInt- 00:06:51.460 [2024-09-30 
21:50:35.669091] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069549522943 len:65536 00:06:51.460 [2024-09-30 21:50:35.669121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.460 [2024-09-30 21:50:35.669192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:06:51.460 [2024-09-30 21:50:35.669212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.460 [2024-09-30 21:50:35.669315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:06:51.460 [2024-09-30 21:50:35.669334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.460 [2024-09-30 21:50:35.669445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:06:51.460 [2024-09-30 21:50:35.669467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:51.460 #76 NEW cov: 12444 ft: 15343 corp: 32/1325b lim: 50 exec/s: 38 rss: 75Mb L: 43/50 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:06:51.460 #76 DONE cov: 12444 ft: 15343 corp: 32/1325b lim: 50 exec/s: 38 rss: 75Mb 00:06:51.460 ###### Recommended dictionary. ###### 00:06:51.460 "\377\377\377\377" # Uses: 2 00:06:51.460 "\017\000\000\000\000\000\000\000" # Uses: 1 00:06:51.460 "\377\377\377\377\377\377\377\017" # Uses: 1 00:06:51.460 ###### End of recommended dictionary. 
###### 00:06:51.460 Done 76 runs in 2 second(s) 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:51.719 21:50:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:06:51.719 [2024-09-30 21:50:35.879940] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:51.719 [2024-09-30 21:50:35.880010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053365 ] 00:06:51.719 [2024-09-30 21:50:36.054141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.978 [2024-09-30 21:50:36.120112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.978 [2024-09-30 21:50:36.178854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.978 [2024-09-30 21:50:36.195183] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:51.978 INFO: Running with entropic power schedule (0xFF, 100). 00:06:51.978 INFO: Seed: 2157512842 00:06:51.978 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:51.978 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:51.978 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:06:51.978 INFO: A corpus is not provided, starting from an empty corpus 00:06:51.978 #2 INITED exec/s: 0 rss: 65Mb 00:06:51.978 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:51.978 This may also happen if the target rejected all inputs we tried so far 00:06:51.978 [2024-09-30 21:50:36.240906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:51.978 [2024-09-30 21:50:36.240936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:51.978 [2024-09-30 21:50:36.240973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:51.978 [2024-09-30 21:50:36.240990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:51.978 [2024-09-30 21:50:36.241048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:51.978 [2024-09-30 21:50:36.241064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:51.978 [2024-09-30 21:50:36.241117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:51.978 [2024-09-30 21:50:36.241133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.237 NEW_FUNC[1/715]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:06:52.237 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:52.237 #5 NEW cov: 12271 ft: 12266 corp: 2/74b lim: 90 exec/s: 0 rss: 73Mb L: 73/73 MS: 3 ChangeByte-ChangeBinInt-InsertRepeatedBytes- 00:06:52.237 [2024-09-30 21:50:36.571718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.237 [2024-09-30 21:50:36.571760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:52.237 [2024-09-30 21:50:36.571825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.237 [2024-09-30 21:50:36.571847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.237 [2024-09-30 21:50:36.571908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.237 [2024-09-30 21:50:36.571928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.237 [2024-09-30 21:50:36.571990] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.237 [2024-09-30 21:50:36.572010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.496 NEW_FUNC[1/1]: 0x19acf48 in nvme_tcp_qpair /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_tcp.c:183 00:06:52.496 #6 NEW cov: 12388 ft: 12833 corp: 3/160b lim: 90 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 CopyPart- 00:06:52.496 [2024-09-30 21:50:36.631781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.496 [2024-09-30 21:50:36.631809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.631854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.496 [2024-09-30 21:50:36.631870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.631921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.496 [2024-09-30 21:50:36.631937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.631991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.496 [2024-09-30 21:50:36.632005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.496 #7 NEW cov: 12394 ft: 13135 corp: 4/246b lim: 90 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 ShuffleBytes- 00:06:52.496 [2024-09-30 21:50:36.691896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.496 [2024-09-30 21:50:36.691926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.691962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.496 [2024-09-30 21:50:36.691982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.692034] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.496 [2024-09-30 21:50:36.692050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:06:52.496 [2024-09-30 21:50:36.692102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.496 [2024-09-30 21:50:36.692119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.496 #13 NEW cov: 12479 ft: 13375 corp: 5/319b lim: 90 exec/s: 0 rss: 73Mb L: 73/86 MS: 1 ChangeBit- 00:06:52.496 [2024-09-30 21:50:36.732039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.496 [2024-09-30 21:50:36.732067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.732112] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.496 [2024-09-30 21:50:36.732128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.732178] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.496 [2024-09-30 21:50:36.732193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.732247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.496 [2024-09-30 21:50:36.732262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.496 #14 NEW cov: 12479 ft: 13494 corp: 6/405b lim: 90 exec/s: 0 rss: 73Mb L: 86/86 MS: 1 ChangeBit- 00:06:52.496 [2024-09-30 21:50:36.772141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.496 [2024-09-30 21:50:36.772168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.772209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.496 [2024-09-30 21:50:36.772226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.772279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.496 [2024-09-30 21:50:36.772295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.772353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.496 [2024-09-30 21:50:36.772369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.496 #15 NEW cov: 12479 ft: 13565 corp: 7/488b lim: 90 exec/s: 0 rss: 73Mb L: 83/86 MS: 1 InsertRepeatedBytes- 00:06:52.496 [2024-09-30 21:50:36.812301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.496 [2024-09-30 21:50:36.812334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:52.496 [2024-09-30 21:50:36.812383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.496 [2024-09-30 21:50:36.812400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.496 [2024-09-30 21:50:36.812454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.497 [2024-09-30 21:50:36.812474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.497 [2024-09-30 21:50:36.812528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.497 [2024-09-30 21:50:36.812545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.497 #16 NEW cov: 12479 ft: 13608 corp: 8/575b lim: 90 exec/s: 0 rss: 73Mb L: 87/87 MS: 1 InsertByte- 00:06:52.755 [2024-09-30 21:50:36.872444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.755 [2024-09-30 21:50:36.872473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.872522] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.755 [2024-09-30 21:50:36.872539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.872589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.755 [2024-09-30 21:50:36.872605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.872658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.755 [2024-09-30 21:50:36.872674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.755 #17 NEW cov: 12479 ft: 13648 corp: 9/648b lim: 90 exec/s: 0 rss: 74Mb L: 73/87 MS: 1 ShuffleBytes- 00:06:52.755 [2024-09-30 21:50:36.932315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.755 [2024-09-30 21:50:36.932342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.932377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.755 [2024-09-30 21:50:36.932393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.755 #18 NEW cov: 12479 ft: 14155 corp: 10/698b lim: 90 exec/s: 0 rss: 74Mb L: 50/87 MS: 1 EraseBytes- 00:06:52.755 [2024-09-30 21:50:36.992719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.755 [2024-09-30 21:50:36.992747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:52.755 [2024-09-30 21:50:36.992794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.755 [2024-09-30 21:50:36.992810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.992862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.755 [2024-09-30 21:50:36.992878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:36.992931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.755 [2024-09-30 21:50:36.992947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.755 #19 NEW cov: 12479 ft: 14206 corp: 11/771b lim: 90 exec/s: 0 rss: 74Mb L: 73/87 MS: 1 ChangeByte- 00:06:52.755 [2024-09-30 21:50:37.052921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.755 [2024-09-30 21:50:37.052948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.052992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.755 [2024-09-30 21:50:37.053008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.053061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.755 [2024-09-30 21:50:37.053076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.053129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:52.755 [2024-09-30 21:50:37.053145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.755 #20 NEW cov: 12479 ft: 14221 corp: 12/857b lim: 90 exec/s: 0 rss: 74Mb L: 86/87 MS: 1 ChangeBinInt- 00:06:52.755 [2024-09-30 21:50:37.093018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:52.755 [2024-09-30 21:50:37.093045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.093094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:52.755 [2024-09-30 21:50:37.093110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.093163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:52.755 [2024-09-30 21:50:37.093179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:52.755 [2024-09-30 21:50:37.093234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:3 nsid:0 00:06:52.755 [2024-09-30 21:50:37.093250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:52.755 #21 NEW cov: 12479 ft: 14311 corp: 13/944b lim: 90 exec/s: 0 rss: 74Mb L: 87/87 MS: 1 InsertByte- 00:06:53.013 [2024-09-30 21:50:37.133158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.013 [2024-09-30 21:50:37.133186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.013 [2024-09-30 21:50:37.133237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.013 [2024-09-30 21:50:37.133251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.013 [2024-09-30 21:50:37.133303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.013 [2024-09-30 21:50:37.133324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.013 [2024-09-30 21:50:37.133376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.014 [2024-09-30 21:50:37.133392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.014 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:53.014 #22 NEW cov: 12502 ft: 14364 corp: 14/1027b lim: 90 exec/s: 0 rss: 74Mb L: 83/87 MS: 1 ChangeBit- 00:06:53.014 [2024-09-30 21:50:37.193013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.014 [2024-09-30 21:50:37.193042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.193091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.014 [2024-09-30 21:50:37.193111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.014 #23 NEW cov: 12502 ft: 14383 corp: 15/1077b lim: 90 exec/s: 0 rss: 74Mb L: 50/87 MS: 1 InsertRepeatedBytes- 00:06:53.014 [2024-09-30 21:50:37.233454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.014 [2024-09-30 21:50:37.233482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.233529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.014 [2024-09-30 21:50:37.233545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.233597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.014 [2024-09-30 21:50:37.233613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.233665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.014 [2024-09-30 21:50:37.233681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.014 #24 NEW cov: 12502 ft: 14422 corp: 16/1163b lim: 90 exec/s: 24 rss: 74Mb L: 86/87 MS: 1 CopyPart- 00:06:53.014 [2024-09-30 21:50:37.273532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.014 [2024-09-30 21:50:37.273559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.273612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.014 [2024-09-30 21:50:37.273628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.273678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.014 [2024-09-30 21:50:37.273693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.273746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.014 [2024-09-30 21:50:37.273762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.014 #25 NEW cov: 12502 ft: 14431 corp: 17/1250b lim: 90 exec/s: 25 rss: 74Mb L: 87/87 MS: 1 InsertByte- 00:06:53.014 [2024-09-30 21:50:37.333662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.014 [2024-09-30 21:50:37.333689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.333743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.014 [2024-09-30 21:50:37.333760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.333812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.014 [2024-09-30 21:50:37.333828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.014 [2024-09-30 21:50:37.333883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.014 [2024-09-30 21:50:37.333900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.014 #26 NEW cov: 12502 ft: 14444 corp: 18/1333b lim: 90 exec/s: 26 rss: 74Mb L: 83/87 MS: 1 CrossOver- 00:06:53.273 [2024-09-30 21:50:37.393561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.273 [2024-09-30 21:50:37.393587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:06:53.273 [2024-09-30 21:50:37.393621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.273 [2024-09-30 21:50:37.393635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.273 #27 NEW cov: 12502 ft: 14525 corp: 19/1383b lim: 90 exec/s: 27 rss: 74Mb L: 50/87 MS: 1 CopyPart- 00:06:53.273 [2024-09-30 21:50:37.454044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.273 [2024-09-30 21:50:37.454071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.454121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.273 [2024-09-30 21:50:37.454137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.454188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.273 [2024-09-30 21:50:37.454204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.454256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.273 [2024-09-30 21:50:37.454272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.273 #28 NEW cov: 12502 ft: 14536 corp: 20/1470b lim: 90 exec/s: 28 rss: 74Mb L: 87/87 MS: 1 ChangeBinInt- 00:06:53.273 [2024-09-30 21:50:37.514185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.273 [2024-09-30 21:50:37.514212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.514254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.273 [2024-09-30 21:50:37.514270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.514326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.273 [2024-09-30 21:50:37.514340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.273 [2024-09-30 21:50:37.514393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.273 [2024-09-30 21:50:37.514409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.273 #29 NEW cov: 12502 ft: 14545 corp: 21/1543b lim: 90 exec/s: 29 rss: 74Mb L: 73/87 MS: 1 ChangeBit- 00:06:53.273 [2024-09-30 21:50:37.574390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.273 [2024-09-30 21:50:37.574417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:06:53.273 [2024-09-30 21:50:37.574464] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.274 [2024-09-30 21:50:37.574480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.274 [2024-09-30 21:50:37.574533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.274 [2024-09-30 21:50:37.574553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.274 [2024-09-30 21:50:37.574605] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.274 [2024-09-30 21:50:37.574622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.274 #30 NEW cov: 12502 ft: 14549 corp: 22/1629b lim: 90 exec/s: 30 rss: 74Mb L: 86/87 MS: 1 CMP- DE: "\000\000\177R\264\022\306u"- 00:06:53.274 [2024-09-30 21:50:37.614470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.274 [2024-09-30 21:50:37.614498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.274 [2024-09-30 21:50:37.614551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.274 [2024-09-30 21:50:37.614567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.274 [2024-09-30 21:50:37.614620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.274 [2024-09-30 21:50:37.614635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.274 [2024-09-30 21:50:37.614689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.274 [2024-09-30 21:50:37.614705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.274 #31 NEW cov: 12502 ft: 14580 corp: 23/1716b lim: 90 exec/s: 31 rss: 74Mb L: 87/87 MS: 1 ShuffleBytes- 00:06:53.532 [2024-09-30 21:50:37.654609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.532 [2024-09-30 21:50:37.654637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.532 [2024-09-30 21:50:37.654693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.532 [2024-09-30 21:50:37.654716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.532 [2024-09-30 21:50:37.654785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.532 [2024-09-30 21:50:37.654809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.532 [2024-09-30 21:50:37.654881] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.532 [2024-09-30 21:50:37.654903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.532 #32 NEW cov: 12502 ft: 14585 corp: 24/1799b lim: 90 exec/s: 32 rss: 75Mb L: 83/87 MS: 1 ChangeBit- 00:06:53.532 [2024-09-30 21:50:37.714763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.532 [2024-09-30 21:50:37.714790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.532 [2024-09-30 21:50:37.714840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.532 [2024-09-30 21:50:37.714855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.532 [2024-09-30 21:50:37.714906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.533 [2024-09-30 21:50:37.714923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.714974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.533 [2024-09-30 21:50:37.714994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.533 #33 NEW cov: 12502 ft: 14643 corp: 25/1882b lim: 90 exec/s: 33 rss: 75Mb L: 83/87 MS: 1 ChangeByte- 00:06:53.533 [2024-09-30 21:50:37.754873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.533 [2024-09-30 21:50:37.754900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.754951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.533 [2024-09-30 21:50:37.754967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.755019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.533 [2024-09-30 21:50:37.755035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.755089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.533 [2024-09-30 21:50:37.755105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.533 #34 NEW cov: 12502 ft: 14659 corp: 26/1969b lim: 90 exec/s: 34 rss: 75Mb L: 87/87 MS: 1 InsertByte- 00:06:53.533 [2024-09-30 21:50:37.794688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.533 [2024-09-30 21:50:37.794716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.794771] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.533 [2024-09-30 21:50:37.794788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.533 #35 NEW cov: 12502 ft: 14681 corp: 27/2019b lim: 90 exec/s: 35 rss: 75Mb L: 50/87 MS: 1 EraseBytes- 00:06:53.533 [2024-09-30 21:50:37.855135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.533 [2024-09-30 21:50:37.855162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.855209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.533 [2024-09-30 21:50:37.855226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.855278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.533 [2024-09-30 21:50:37.855294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.855352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.533 [2024-09-30 21:50:37.855366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.533 #36 NEW cov: 12502 ft: 14693 corp: 28/2102b lim: 90 exec/s: 36 rss: 75Mb L: 83/87 MS: 1 ShuffleBytes- 00:06:53.533 [2024-09-30 21:50:37.895216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.533 [2024-09-30 21:50:37.895243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.895288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.533 [2024-09-30 21:50:37.895303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.895366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.533 [2024-09-30 21:50:37.895382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.533 [2024-09-30 21:50:37.895438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.533 [2024-09-30 21:50:37.895453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.792 #37 NEW cov: 12502 ft: 14712 corp: 29/2189b lim: 90 exec/s: 37 rss: 75Mb L: 87/87 MS: 1 CopyPart- 00:06:53.792 [2024-09-30 21:50:37.955570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.792 [2024-09-30 21:50:37.955599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:37.955652] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.792 [2024-09-30 21:50:37.955667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:37.955719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.792 [2024-09-30 21:50:37.955733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:37.955785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.792 [2024-09-30 21:50:37.955801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:37.955854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:06:53.792 [2024-09-30 21:50:37.955869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:53.792 #38 NEW cov: 12502 ft: 14762 corp: 30/2279b lim: 90 exec/s: 38 rss: 75Mb L: 90/90 MS: 1 CopyPart- 00:06:53.792 [2024-09-30 21:50:37.995231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.792 [2024-09-30 21:50:37.995260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:37.995301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.792 [2024-09-30 21:50:37.995322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.792 #39 NEW cov: 12502 ft: 14780 corp: 31/2329b lim: 90 exec/s: 39 rss: 75Mb L: 50/90 MS: 1 EraseBytes- 00:06:53.792 [2024-09-30 21:50:38.055672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.792 [2024-09-30 21:50:38.055699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.055745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.792 [2024-09-30 21:50:38.055761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.055814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.792 [2024-09-30 21:50:38.055830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.055884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.792 [2024-09-30 21:50:38.055900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.792 #40 NEW cov: 12502 ft: 14848 corp: 32/2416b lim: 90 exec/s: 40 rss: 75Mb L: 87/90 MS: 1 ChangeByte- 00:06:53.792 [2024-09-30 21:50:38.095674] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.792 [2024-09-30 21:50:38.095702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.095738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.792 [2024-09-30 21:50:38.095755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.095810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.792 [2024-09-30 21:50:38.095826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.792 #41 NEW cov: 12502 ft: 15117 corp: 33/2473b lim: 90 exec/s: 41 rss: 75Mb L: 57/90 MS: 1 CrossOver- 00:06:53.792 [2024-09-30 21:50:38.135905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:53.792 [2024-09-30 21:50:38.135933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.135980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:53.792 [2024-09-30 21:50:38.135997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.136048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:53.792 [2024-09-30 21:50:38.136063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:53.792 [2024-09-30 21:50:38.136115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:53.792 [2024-09-30 21:50:38.136131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:53.792 #42 NEW cov: 12502 ft: 15132 corp: 34/2560b lim: 90 exec/s: 42 rss: 75Mb L: 87/90 MS: 1 ChangeBit- 00:06:54.051 [2024-09-30 21:50:38.176013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.051 [2024-09-30 21:50:38.176040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.051 [2024-09-30 21:50:38.176085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.051 [2024-09-30 21:50:38.176101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.051 [2024-09-30 21:50:38.176155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:06:54.051 [2024-09-30 21:50:38.176171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.051 [2024-09-30 21:50:38.176225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:06:54.051 [2024-09-30 21:50:38.176241] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:54.051 #43 NEW cov: 12502 ft: 15139 corp: 35/2646b lim: 90 exec/s: 43 rss: 75Mb L: 86/90 MS: 1 CrossOver- 00:06:54.051 [2024-09-30 21:50:38.215814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:06:54.051 [2024-09-30 21:50:38.215841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.051 [2024-09-30 21:50:38.215876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:06:54.051 [2024-09-30 21:50:38.215892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.051 #44 NEW cov: 12502 ft: 15181 corp: 36/2692b lim: 90 exec/s: 22 rss: 75Mb L: 46/90 MS: 1 EraseBytes- 00:06:54.051 #44 DONE cov: 12502 ft: 15181 corp: 36/2692b lim: 90 exec/s: 22 rss: 75Mb 00:06:54.051 ###### Recommended dictionary. ###### 00:06:54.052 "\000\000\177R\264\022\306u" # Uses: 0 00:06:54.052 ###### End of recommended dictionary. ###### 00:06:54.052 Done 44 runs in 2 second(s) 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:54.052 21:50:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:06:54.052 [2024-09-30 21:50:38.407108] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:54.052 [2024-09-30 21:50:38.407181] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053873 ] 00:06:54.311 [2024-09-30 21:50:38.586058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.311 [2024-09-30 21:50:38.651207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.570 [2024-09-30 21:50:38.710418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.570 [2024-09-30 21:50:38.726779] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:06:54.570 INFO: Running with entropic power schedule (0xFF, 100). 00:06:54.570 INFO: Seed: 395539826 00:06:54.570 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:54.570 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:54.570 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:06:54.570 INFO: A corpus is not provided, starting from an empty corpus 00:06:54.570 #2 INITED exec/s: 0 rss: 65Mb 00:06:54.570 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:06:54.570 This may also happen if the target rejected all inputs we tried so far 00:06:54.570 [2024-09-30 21:50:38.771582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:54.570 [2024-09-30 21:50:38.771615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.570 [2024-09-30 21:50:38.771665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:54.570 [2024-09-30 21:50:38.771683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.570 [2024-09-30 21:50:38.771713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:54.570 [2024-09-30 21:50:38.771729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:54.830 NEW_FUNC[1/715]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:06:54.830 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:54.830 #7 NEW cov: 12249 ft: 12245 corp: 2/36b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 5 InsertByte-EraseBytes-CMP-ChangeByte-InsertRepeatedBytes- DE: "\027\000\000\000\000\000\000\000"- 00:06:54.830 [2024-09-30 21:50:39.122329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:54.830 [2024-09-30 21:50:39.122366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.830 NEW_FUNC[1/1]: 0xfb3008 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:31 00:06:54.830 #17 NEW cov: 12363 ft: 13658 corp: 3/51b lim: 50 exec/s: 0 rss: 73Mb L: 15/35 MS: 5 ChangeByte-ChangeByte-PersAutoDict-ChangeByte-CopyPart- DE: "\027\000\000\000\000\000\000\000"- 00:06:54.830 [2024-09-30 21:50:39.182498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:54.830 [2024-09-30 21:50:39.182528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:54.830 [2024-09-30 21:50:39.182577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:54.830 [2024-09-30 21:50:39.182600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:54.830 [2024-09-30 21:50:39.182630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:54.830 [2024-09-30 21:50:39.182646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.089 #18 NEW cov: 12369 ft: 13931 corp: 4/86b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ShuffleBytes- 00:06:55.089 [2024-09-30 21:50:39.272743] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.089 [2024-09-30 21:50:39.272773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.272821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.089 [2024-09-30 21:50:39.272845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.272875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.089 [2024-09-30 21:50:39.272891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.089 #19 NEW cov: 12454 ft: 14126 corp: 5/121b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:06:55.089 [2024-09-30 21:50:39.322733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.089 [2024-09-30 21:50:39.322763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.089 #22 NEW cov: 12454 ft: 14245 corp: 6/138b lim: 50 exec/s: 0 rss: 73Mb L: 17/35 MS: 3 CrossOver-ShuffleBytes-CrossOver- 00:06:55.089 [2024-09-30 21:50:39.372967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.089 [2024-09-30 21:50:39.372996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.373043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.089 [2024-09-30 21:50:39.373067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.373098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.089 [2024-09-30 21:50:39.373114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.089 #23 NEW cov: 12454 ft: 14354 corp: 7/173b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 CMP- DE: "\005\000\000\000"- 00:06:55.089 [2024-09-30 21:50:39.423141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.089 [2024-09-30 21:50:39.423172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.423221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.089 [2024-09-30 21:50:39.423243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.089 [2024-09-30 21:50:39.423275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.089 [2024-09-30 21:50:39.423291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.348 #24 NEW cov: 12454 ft: 14428 corp: 8/210b lim: 50 exec/s: 0 rss: 73Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:06:55.348 [2024-09-30 21:50:39.513342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.348 [2024-09-30 21:50:39.513372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.348 [2024-09-30 21:50:39.513419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.348 [2024-09-30 21:50:39.513443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.348 [2024-09-30 21:50:39.513473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.348 [2024-09-30 21:50:39.513490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.348 #25 NEW cov: 12454 ft: 14458 corp: 9/245b lim: 50 exec/s: 0 rss: 73Mb L: 35/37 MS: 1 ChangeBit- 00:06:55.348 [2024-09-30 21:50:39.603584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.348 [2024-09-30 21:50:39.603613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.348 [2024-09-30 21:50:39.603661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.348 [2024-09-30 21:50:39.603684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.349 [2024-09-30 21:50:39.603720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.349 [2024-09-30 21:50:39.603737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.349 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:55.349 #26 NEW cov: 12477 ft: 14544 corp: 10/281b lim: 50 exec/s: 0 rss: 74Mb L: 36/37 MS: 1 InsertByte- 00:06:55.349 [2024-09-30 21:50:39.693757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.349 [2024-09-30 21:50:39.693786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.349 [2024-09-30 21:50:39.693834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.349 [2024-09-30 21:50:39.693860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.608 #27 NEW cov: 12477 ft: 14817 corp: 11/303b lim: 50 exec/s: 0 rss: 74Mb L: 22/37 MS: 1 EraseBytes- 00:06:55.608 [2024-09-30 21:50:39.754007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.608 [2024-09-30 21:50:39.754037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.608 [2024-09-30 21:50:39.754084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.608 [2024-09-30 21:50:39.754109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:06:55.608 [2024-09-30 21:50:39.754140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.608 [2024-09-30 21:50:39.754155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.608 #28 NEW cov: 12477 ft: 14843 corp: 12/338b lim: 50 exec/s: 28 rss: 74Mb L: 35/37 MS: 1 ChangeByte- 00:06:55.608 [2024-09-30 21:50:39.804041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.608 [2024-09-30 21:50:39.804071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.608 [2024-09-30 21:50:39.804119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.608 [2024-09-30 21:50:39.804145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.608 #29 NEW cov: 12477 ft: 14873 corp: 13/360b lim: 50 exec/s: 29 rss: 74Mb L: 22/37 MS: 1 ShuffleBytes- 00:06:55.608 [2024-09-30 21:50:39.894366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.608 [2024-09-30 21:50:39.894396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.608 [2024-09-30 21:50:39.894443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.608 [2024-09-30 21:50:39.894469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.608 [2024-09-30 21:50:39.894498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.608 [2024-09-30 21:50:39.894514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.608 #30 NEW cov: 12477 ft: 14914 corp: 14/396b lim: 50 exec/s: 30 rss: 74Mb L: 36/37 MS: 1 CopyPart- 00:06:55.868 [2024-09-30 21:50:39.984755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.868 [2024-09-30 21:50:39.984786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:39.984837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.868 [2024-09-30 21:50:39.984855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:39.984886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.868 [2024-09-30 21:50:39.984902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:39.984929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:55.868 [2024-09-30 21:50:39.984945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 
sqhd:0005 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:39.984972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:55.868 [2024-09-30 21:50:39.984988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:55.868 #31 NEW cov: 12477 ft: 15298 corp: 15/446b lim: 50 exec/s: 31 rss: 74Mb L: 50/50 MS: 1 InsertRepeatedBytes- 00:06:55.868 [2024-09-30 21:50:40.075060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.868 [2024-09-30 21:50:40.075103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.075153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.868 [2024-09-30 21:50:40.075173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.075202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.868 [2024-09-30 21:50:40.075220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.075249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:06:55.868 [2024-09-30 21:50:40.075265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.075294] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:06:55.868 [2024-09-30 21:50:40.075316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:55.868 #32 NEW cov: 12477 ft: 15365 corp: 16/496b lim: 50 exec/s: 32 rss: 74Mb L: 50/50 MS: 1 CopyPart- 00:06:55.868 [2024-09-30 21:50:40.165081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:55.868 [2024-09-30 21:50:40.165114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.165162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:55.868 [2024-09-30 21:50:40.165183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:55.868 [2024-09-30 21:50:40.165213] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:55.868 [2024-09-30 21:50:40.165229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:55.868 #33 NEW cov: 12477 ft: 15391 corp: 17/535b lim: 50 exec/s: 33 rss: 74Mb L: 39/50 MS: 1 CMP- DE: "\000@\000\000"- 00:06:56.128 [2024-09-30 21:50:40.255237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.128 [2024-09-30 21:50:40.255266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.255325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.128 [2024-09-30 21:50:40.255344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.128 #34 NEW cov: 12477 ft: 15419 corp: 18/557b lim: 50 exec/s: 34 rss: 74Mb L: 22/50 MS: 1 EraseBytes- 00:06:56.128 [2024-09-30 21:50:40.315366] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.128 [2024-09-30 21:50:40.315395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.315443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.128 [2024-09-30 21:50:40.315465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.128 #35 NEW cov: 12477 ft: 15435 corp: 19/579b lim: 50 exec/s: 35 rss: 74Mb L: 22/50 MS: 1 ChangeBit- 00:06:56.128 [2024-09-30 21:50:40.375584] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.128 [2024-09-30 21:50:40.375614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.375645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.128 [2024-09-30 21:50:40.375677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.375715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:56.128 [2024-09-30 21:50:40.375731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.128 #36 NEW cov: 12477 ft: 15500 corp: 20/618b lim: 50 exec/s: 36 rss: 74Mb L: 39/50 MS: 1 CopyPart- 00:06:56.128 [2024-09-30 21:50:40.465852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.128 [2024-09-30 21:50:40.465881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.465928] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.128 [2024-09-30 21:50:40.465945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.128 [2024-09-30 21:50:40.465974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:56.128 [2024-09-30 21:50:40.465990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.387 #37 NEW cov: 12477 ft: 15536 corp: 21/648b lim: 50 exec/s: 37 rss: 74Mb L: 30/50 MS: 1 PersAutoDict- DE: "\027\000\000\000\000\000\000\000"- 00:06:56.387 [2024-09-30 21:50:40.525977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 
nsid:0 00:06:56.387 [2024-09-30 21:50:40.526006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.387 [2024-09-30 21:50:40.526053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.387 [2024-09-30 21:50:40.526070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.387 [2024-09-30 21:50:40.526106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:06:56.387 [2024-09-30 21:50:40.526122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:56.387 #38 NEW cov: 12477 ft: 15555 corp: 22/684b lim: 50 exec/s: 38 rss: 74Mb L: 36/50 MS: 1 InsertByte- 00:06:56.387 [2024-09-30 21:50:40.576049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.387 [2024-09-30 21:50:40.576078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.387 [2024-09-30 21:50:40.576126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.387 [2024-09-30 21:50:40.576143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.387 #39 NEW cov: 12477 ft: 15586 corp: 23/707b lim: 50 exec/s: 39 rss: 74Mb L: 23/50 MS: 1 InsertByte- 00:06:56.387 [2024-09-30 21:50:40.666319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.388 [2024-09-30 21:50:40.666348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.388 [2024-09-30 21:50:40.666395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:06:56.388 [2024-09-30 21:50:40.666413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:56.388 #40 NEW cov: 12477 ft: 15628 corp: 24/730b lim: 50 exec/s: 40 rss: 74Mb L: 23/50 MS: 1 CopyPart- 00:06:56.647 [2024-09-30 21:50:40.756540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:06:56.647 [2024-09-30 21:50:40.756570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:56.647 #44 NEW cov: 12477 ft: 15666 corp: 25/746b lim: 50 exec/s: 22 rss: 75Mb L: 16/50 MS: 4 CrossOver-ChangeBinInt-ChangeByte-CMP- DE: "\001\000\000\000\000\000\000\001"- 00:06:56.647 #44 DONE cov: 12477 ft: 15666 corp: 25/746b lim: 50 exec/s: 22 rss: 75Mb 00:06:56.647 ###### Recommended dictionary. ###### 00:06:56.647 "\027\000\000\000\000\000\000\000" # Uses: 2 00:06:56.647 "\005\000\000\000" # Uses: 0 00:06:56.647 "\000@\000\000" # Uses: 0 00:06:56.647 "\001\000\000\000\000\000\000\001" # Uses: 0 00:06:56.647 ###### End of recommended dictionary. 
###### 00:06:56.647 Done 44 runs in 2 second(s) 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:06:56.647 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:56.648 21:50:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:06:56.648 [2024-09-30 21:50:40.996808] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:56.648 [2024-09-30 21:50:40.996881] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054188 ] 00:06:56.907 [2024-09-30 21:50:41.180861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.907 [2024-09-30 21:50:41.249829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.166 [2024-09-30 21:50:41.309419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.166 [2024-09-30 21:50:41.325784] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:06:57.166 INFO: Running with entropic power schedule (0xFF, 100). 00:06:57.166 INFO: Seed: 2994540745 00:06:57.166 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:57.166 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:57.166 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:06:57.166 INFO: A corpus is not provided, starting from an empty corpus 00:06:57.166 #2 INITED exec/s: 0 rss: 64Mb 00:06:57.166 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:57.166 This may also happen if the target rejected all inputs we tried so far 00:06:57.166 [2024-09-30 21:50:41.401843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.166 [2024-09-30 21:50:41.401887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.426 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:06:57.426 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:06:57.426 #7 NEW cov: 12276 ft: 12256 corp: 2/32b lim: 85 exec/s: 0 rss: 71Mb L: 31/31 MS: 5 InsertByte-EraseBytes-ChangeByte-CrossOver-InsertRepeatedBytes- 00:06:57.426 [2024-09-30 21:50:41.732630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.426 [2024-09-30 21:50:41.732664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.426 #8 NEW cov: 12389 ft: 13028 corp: 3/63b lim: 85 exec/s: 0 rss: 71Mb L: 31/31 MS: 1 ShuffleBytes- 00:06:57.685 [2024-09-30 21:50:41.802765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.685 [2024-09-30 21:50:41.802798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.685 #9 NEW cov: 12395 ft: 13211 corp: 4/91b lim: 85 exec/s: 0 rss: 71Mb L: 28/31 MS: 1 EraseBytes- 00:06:57.685 [2024-09-30 21:50:41.873684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.685 [2024-09-30 21:50:41.873718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.685 [2024-09-30 21:50:41.873793] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.685 [2024-09-30 21:50:41.873817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.685 [2024-09-30 21:50:41.873942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:57.685 [2024-09-30 21:50:41.873968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.686 [2024-09-30 21:50:41.874104] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:57.686 [2024-09-30 21:50:41.874133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.686 #10 NEW cov: 12480 ft: 14318 corp: 5/161b lim: 85 exec/s: 0 rss: 71Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:06:57.686 [2024-09-30 21:50:41.933206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.686 [2024-09-30 21:50:41.933231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.686 #15 NEW cov: 12480 ft: 14466 corp: 6/190b lim: 85 exec/s: 0 rss: 71Mb L: 29/70 MS: 5 ShuffleBytes-CopyPart-CopyPart-ChangeBit-CrossOver- 00:06:57.686 [2024-09-30 21:50:41.984408] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.686 [2024-09-30 21:50:41.984442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.686 [2024-09-30 21:50:41.984529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.686 [2024-09-30 21:50:41.984565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.686 [2024-09-30 21:50:41.984685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:57.686 [2024-09-30 21:50:41.984711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:57.686 [2024-09-30 21:50:41.984828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:57.686 [2024-09-30 21:50:41.984854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:57.686 [2024-09-30 21:50:41.984974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:06:57.686 [2024-09-30 21:50:41.984999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:06:57.686 #16 NEW cov: 12480 ft: 14575 corp: 7/275b lim: 85 exec/s: 0 rss: 71Mb L: 85/85 MS: 1 CrossOver- 00:06:57.686 [2024-09-30 21:50:42.053534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.686 [2024-09-30 21:50:42.053567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.945 
#17 NEW cov: 12480 ft: 14679 corp: 8/305b lim: 85 exec/s: 0 rss: 71Mb L: 30/85 MS: 1 CMP- DE: "\017\000"- 00:06:57.945 [2024-09-30 21:50:42.123970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.945 [2024-09-30 21:50:42.124003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.945 [2024-09-30 21:50:42.124145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.945 [2024-09-30 21:50:42.124169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.945 #18 NEW cov: 12480 ft: 15020 corp: 9/344b lim: 85 exec/s: 0 rss: 71Mb L: 39/85 MS: 1 CMP- DE: "\222|\032\225\371=f\000"- 00:06:57.945 [2024-09-30 21:50:42.174125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.945 [2024-09-30 21:50:42.174156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.945 [2024-09-30 21:50:42.174292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.945 [2024-09-30 21:50:42.174321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.945 #19 NEW cov: 12480 ft: 15045 corp: 10/384b lim: 85 exec/s: 0 rss: 71Mb L: 40/85 MS: 1 InsertByte- 00:06:57.945 [2024-09-30 21:50:42.244434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.945 [2024-09-30 21:50:42.244465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.945 [2024-09-30 21:50:42.244590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.945 [2024-09-30 21:50:42.244616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.945 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:06:57.945 #20 NEW cov: 12503 ft: 15112 corp: 11/423b lim: 85 exec/s: 0 rss: 71Mb L: 39/85 MS: 1 PersAutoDict- DE: "\017\000"- 00:06:57.945 [2024-09-30 21:50:42.294783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:57.945 [2024-09-30 21:50:42.294818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:57.945 [2024-09-30 21:50:42.294942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:57.945 [2024-09-30 21:50:42.294967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:57.945 [2024-09-30 21:50:42.295088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:57.946 [2024-09-30 21:50:42.295116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.205 #21 NEW cov: 12503 ft: 15410 corp: 
12/475b lim: 85 exec/s: 0 rss: 71Mb L: 52/85 MS: 1 InsertRepeatedBytes- 00:06:58.206 [2024-09-30 21:50:42.344710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.206 [2024-09-30 21:50:42.344744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.206 [2024-09-30 21:50:42.344875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.206 [2024-09-30 21:50:42.344902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.206 #22 NEW cov: 12503 ft: 15558 corp: 13/511b lim: 85 exec/s: 22 rss: 71Mb L: 36/85 MS: 1 PersAutoDict- DE: "\222|\032\225\371=f\000"- 00:06:58.206 [2024-09-30 21:50:42.405013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.206 [2024-09-30 21:50:42.405049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.206 [2024-09-30 21:50:42.405176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.206 [2024-09-30 21:50:42.405205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.206 [2024-09-30 21:50:42.405328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.206 [2024-09-30 21:50:42.405350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.206 #23 NEW cov: 12503 ft: 15576 corp: 14/563b lim: 85 exec/s: 23 rss: 72Mb L: 52/85 MS: 1 PersAutoDict- DE: "\222|\032\225\371=f\000"- 00:06:58.206 [2024-09-30 21:50:42.474811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.206 [2024-09-30 21:50:42.474842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.206 #24 NEW cov: 12503 ft: 15664 corp: 15/582b lim: 85 exec/s: 24 rss: 72Mb L: 19/85 MS: 1 EraseBytes- 00:06:58.206 [2024-09-30 21:50:42.525252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.206 [2024-09-30 21:50:42.525287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.206 [2024-09-30 21:50:42.525391] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.206 [2024-09-30 21:50:42.525415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.206 #25 NEW cov: 12503 ft: 15673 corp: 16/622b lim: 85 exec/s: 25 rss: 72Mb L: 40/85 MS: 1 ChangeBit- 00:06:58.465 [2024-09-30 21:50:42.595715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.465 [2024-09-30 21:50:42.595751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.595815] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.465 [2024-09-30 21:50:42.595839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.595954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.465 [2024-09-30 21:50:42.595973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.465 #26 NEW cov: 12503 ft: 15715 corp: 17/674b lim: 85 exec/s: 26 rss: 72Mb L: 52/85 MS: 1 ChangeBit- 00:06:58.465 [2024-09-30 21:50:42.646154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.465 [2024-09-30 21:50:42.646187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.646281] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.465 [2024-09-30 21:50:42.646315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.646429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.465 [2024-09-30 21:50:42.646451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.646569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:58.465 [2024-09-30 21:50:42.646596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:58.465 #27 NEW cov: 12503 ft: 15765 corp: 18/744b lim: 85 exec/s: 27 rss: 72Mb L: 70/85 MS: 1 ChangeBit- 00:06:58.465 [2024-09-30 21:50:42.696010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.465 [2024-09-30 21:50:42.696044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.696136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.465 [2024-09-30 21:50:42.696157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.465 [2024-09-30 21:50:42.696276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.465 [2024-09-30 21:50:42.696300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.465 #28 NEW cov: 12503 ft: 15785 corp: 19/796b lim: 85 exec/s: 28 rss: 72Mb L: 52/85 MS: 1 ShuffleBytes- 00:06:58.466 [2024-09-30 21:50:42.765807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.466 [2024-09-30 21:50:42.765834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.466 #29 NEW cov: 12503 ft: 15810 corp: 
20/813b lim: 85 exec/s: 29 rss: 72Mb L: 17/85 MS: 1 EraseBytes- 00:06:58.466 [2024-09-30 21:50:42.815806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.466 [2024-09-30 21:50:42.815843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.726 #30 NEW cov: 12503 ft: 15841 corp: 21/833b lim: 85 exec/s: 30 rss: 72Mb L: 20/85 MS: 1 EraseBytes- 00:06:58.726 [2024-09-30 21:50:42.866640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.726 [2024-09-30 21:50:42.866672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.726 [2024-09-30 21:50:42.866747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.726 [2024-09-30 21:50:42.866769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.726 [2024-09-30 21:50:42.866888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.726 [2024-09-30 21:50:42.866914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.726 #31 NEW cov: 12503 ft: 15922 corp: 22/885b lim: 85 exec/s: 31 rss: 72Mb L: 52/85 MS: 1 ChangeByte- 00:06:58.726 [2024-09-30 21:50:42.936826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.726 [2024-09-30 21:50:42.936859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.726 [2024-09-30 21:50:42.936943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.726 [2024-09-30 21:50:42.936965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.726 [2024-09-30 21:50:42.937085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.726 [2024-09-30 21:50:42.937107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.726 #32 NEW cov: 12503 ft: 15937 corp: 23/937b lim: 85 exec/s: 32 rss: 72Mb L: 52/85 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\200"- 00:06:58.726 [2024-09-30 21:50:42.986469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.726 [2024-09-30 21:50:42.986501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.726 #33 NEW cov: 12503 ft: 15954 corp: 24/956b lim: 85 exec/s: 33 rss: 72Mb L: 19/85 MS: 1 ChangeByte- 00:06:58.726 [2024-09-30 21:50:43.056830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.726 [2024-09-30 21:50:43.056860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.726 [2024-09-30 21:50:43.056970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.726 [2024-09-30 21:50:43.056995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.986 #34 NEW cov: 12503 ft: 16018 corp: 25/993b lim: 85 exec/s: 34 rss: 72Mb L: 37/85 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\200"- 00:06:58.986 [2024-09-30 21:50:43.126857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.986 [2024-09-30 21:50:43.126882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.986 #35 NEW cov: 12503 ft: 16026 corp: 26/1013b lim: 85 exec/s: 35 rss: 72Mb L: 20/85 MS: 1 ShuffleBytes- 00:06:58.986 [2024-09-30 21:50:43.196953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.986 [2024-09-30 21:50:43.196986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.986 #36 NEW cov: 12503 ft: 16103 corp: 27/1041b lim: 85 exec/s: 36 rss: 72Mb L: 28/85 MS: 1 PersAutoDict- DE: "\017\000"- 00:06:58.986 [2024-09-30 21:50:43.247788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.986 [2024-09-30 21:50:43.247819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:58.986 [2024-09-30 21:50:43.247911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:58.986 [2024-09-30 21:50:43.247930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:58.986 [2024-09-30 21:50:43.248053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:06:58.986 [2024-09-30 21:50:43.248075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:58.986 #37 NEW cov: 12503 ft: 16125 corp: 28/1093b lim: 85 exec/s: 37 rss: 72Mb L: 52/85 MS: 1 ChangeBinInt- 00:06:58.986 [2024-09-30 21:50:43.317363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:58.986 [2024-09-30 21:50:43.317398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.245 #38 NEW cov: 12503 ft: 16148 corp: 29/1113b lim: 85 exec/s: 38 rss: 73Mb L: 20/85 MS: 1 ShuffleBytes- 00:06:59.245 [2024-09-30 21:50:43.388396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:06:59.245 [2024-09-30 21:50:43.388433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.245 [2024-09-30 21:50:43.388532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:06:59.245 [2024-09-30 21:50:43.388554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.246 [2024-09-30 21:50:43.388670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 
nsid:0 00:06:59.246 [2024-09-30 21:50:43.388695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:06:59.246 [2024-09-30 21:50:43.388816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:06:59.246 [2024-09-30 21:50:43.388841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:06:59.246 #39 NEW cov: 12503 ft: 16203 corp: 30/1189b lim: 85 exec/s: 19 rss: 73Mb L: 76/85 MS: 1 InsertRepeatedBytes- 00:06:59.246 #39 DONE cov: 12503 ft: 16203 corp: 30/1189b lim: 85 exec/s: 19 rss: 73Mb 00:06:59.246 ###### Recommended dictionary. ###### 00:06:59.246 "\017\000" # Uses: 2 00:06:59.246 "\222|\032\225\371=f\000" # Uses: 2 00:06:59.246 "\000\000\000\000\000\000\000\200" # Uses: 1 00:06:59.246 ###### End of recommended dictionary. ###### 00:06:59.246 Done 39 runs in 2 second(s) 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:06:59.246 21:50:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:06:59.246 [2024-09-30 21:50:43.586656] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:59.246 [2024-09-30 21:50:43.586744] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054725 ] 00:06:59.506 [2024-09-30 21:50:43.761255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.506 [2024-09-30 21:50:43.826357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.767 [2024-09-30 21:50:43.885101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.767 [2024-09-30 21:50:43.901469] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:06:59.767 INFO: Running with entropic power schedule (0xFF, 100). 00:06:59.767 INFO: Seed: 1273569633 00:06:59.767 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:06:59.767 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:06:59.767 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:06:59.767 INFO: A corpus is not provided, starting from an empty corpus 00:06:59.767 #2 INITED exec/s: 0 rss: 65Mb 00:06:59.767 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:06:59.767 This may also happen if the target rejected all inputs we tried so far 00:06:59.767 [2024-09-30 21:50:43.946815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:06:59.767 [2024-09-30 21:50:43.946848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:06:59.767 [2024-09-30 21:50:43.946908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:06:59.767 [2024-09-30 21:50:43.946928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:06:59.767 [2024-09-30 21:50:43.946997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:06:59.767 [2024-09-30 21:50:43.947022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.027 NEW_FUNC[1/715]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:07:00.027 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:00.027 #10 NEW cov: 12209 ft: 12204 corp: 2/20b lim: 25 exec/s: 0 rss: 73Mb L: 19/19 MS: 3 CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:07:00.027 [2024-09-30 21:50:44.277519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.027 [2024-09-30 21:50:44.277555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.277621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 
nsid:0 00:07:00.027 [2024-09-30 21:50:44.277641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.027 #16 NEW cov: 12322 ft: 13087 corp: 3/34b lim: 25 exec/s: 0 rss: 73Mb L: 14/19 MS: 1 EraseBytes- 00:07:00.027 [2024-09-30 21:50:44.337958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.027 [2024-09-30 21:50:44.337988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.338044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.027 [2024-09-30 21:50:44.338066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.338130] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.027 [2024-09-30 21:50:44.338153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.338215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.027 [2024-09-30 21:50:44.338235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.338298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.027 [2024-09-30 21:50:44.338321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.027 #17 NEW cov: 12328 ft: 13710 corp: 4/59b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:00.027 [2024-09-30 21:50:44.378029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.027 [2024-09-30 21:50:44.378059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.378115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.027 [2024-09-30 21:50:44.378136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.378201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.027 [2024-09-30 21:50:44.378220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.378282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.027 [2024-09-30 21:50:44.378301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.027 [2024-09-30 21:50:44.378376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.027 [2024-09-30 21:50:44.378396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.287 #18 NEW cov: 12413 ft: 13967 corp: 5/84b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:00.287 [2024-09-30 21:50:44.437998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.287 [2024-09-30 21:50:44.438026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.438083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.287 [2024-09-30 21:50:44.438104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.438170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.287 [2024-09-30 21:50:44.438188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.287 #19 NEW cov: 12413 ft: 14080 corp: 6/103b lim: 25 exec/s: 0 rss: 73Mb L: 19/25 MS: 1 ChangeByte- 00:07:00.287 [2024-09-30 21:50:44.478315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.287 [2024-09-30 21:50:44.478345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.478406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.287 [2024-09-30 21:50:44.478428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.478494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.287 [2024-09-30 21:50:44.478516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.478579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.287 [2024-09-30 21:50:44.478597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.478661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.287 [2024-09-30 21:50:44.478680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.287 #25 NEW cov: 12413 ft: 14192 corp: 7/128b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:07:00.287 [2024-09-30 21:50:44.538484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.287 [2024-09-30 21:50:44.538513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.538571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.287 [2024-09-30 21:50:44.538593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.538659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.287 [2024-09-30 21:50:44.538698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.538766] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.287 [2024-09-30 21:50:44.538785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.538858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.287 [2024-09-30 21:50:44.538877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.287 #26 NEW cov: 12413 ft: 14237 corp: 8/153b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 CrossOver- 00:07:00.287 [2024-09-30 21:50:44.578237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.287 [2024-09-30 21:50:44.578264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.578334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.287 [2024-09-30 21:50:44.578357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.287 #27 NEW cov: 12413 ft: 14322 corp: 9/163b lim: 25 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 CrossOver- 00:07:00.287 [2024-09-30 21:50:44.618683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.287 [2024-09-30 21:50:44.618711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.618767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.287 [2024-09-30 21:50:44.618789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.618852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.287 [2024-09-30 21:50:44.618873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.618937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.287 [2024-09-30 21:50:44.618957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.287 [2024-09-30 21:50:44.619022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.287 [2024-09-30 21:50:44.619043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.546 #28 NEW cov: 12413 ft: 14412 corp: 10/188b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 
ShuffleBytes- 00:07:00.546 [2024-09-30 21:50:44.678623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.546 [2024-09-30 21:50:44.678651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.546 [2024-09-30 21:50:44.678708] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.547 [2024-09-30 21:50:44.678730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.678795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.547 [2024-09-30 21:50:44.678813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.547 #29 NEW cov: 12413 ft: 14446 corp: 11/207b lim: 25 exec/s: 0 rss: 73Mb L: 19/25 MS: 1 ChangeByte- 00:07:00.547 [2024-09-30 21:50:44.718959] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.547 [2024-09-30 21:50:44.718988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.719043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.547 [2024-09-30 21:50:44.719068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.719133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.547 [2024-09-30 21:50:44.719152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.719212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.547 [2024-09-30 21:50:44.719231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.719293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.547 [2024-09-30 21:50:44.719314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.547 #30 NEW cov: 12413 ft: 14481 corp: 12/232b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ChangeBit- 00:07:00.547 [2024-09-30 21:50:44.759086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.547 [2024-09-30 21:50:44.759115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.759169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.547 [2024-09-30 21:50:44.759190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.759254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:2 nsid:0 00:07:00.547 [2024-09-30 21:50:44.759274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.759342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.547 [2024-09-30 21:50:44.759361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.759425] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.547 [2024-09-30 21:50:44.759445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.547 #31 NEW cov: 12413 ft: 14503 corp: 13/257b lim: 25 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:00.547 [2024-09-30 21:50:44.818810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.547 [2024-09-30 21:50:44.818839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.547 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:00.547 #35 NEW cov: 12436 ft: 14995 corp: 14/265b lim: 25 exec/s: 0 rss: 74Mb L: 8/25 MS: 4 CrossOver-ChangeBit-ShuffleBytes-CopyPart- 00:07:00.547 [2024-09-30 21:50:44.879093] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.547 [2024-09-30 21:50:44.879121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.547 [2024-09-30 21:50:44.879187] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.547 [2024-09-30 21:50:44.879211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.547 #37 NEW cov: 12436 ft: 15010 corp: 15/277b lim: 25 exec/s: 0 rss: 74Mb L: 12/25 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:07:00.806 [2024-09-30 21:50:44.919209] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.806 [2024-09-30 21:50:44.919241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.806 [2024-09-30 21:50:44.919312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.806 [2024-09-30 21:50:44.919332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.806 #38 NEW cov: 12436 ft: 15039 corp: 16/289b lim: 25 exec/s: 38 rss: 74Mb L: 12/25 MS: 1 ChangeByte- 00:07:00.806 [2024-09-30 21:50:44.979731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.806 [2024-09-30 21:50:44.979760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.806 [2024-09-30 21:50:44.979815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.806 
[2024-09-30 21:50:44.979837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.806 [2024-09-30 21:50:44.979901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.806 [2024-09-30 21:50:44.979921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.806 [2024-09-30 21:50:44.979982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.806 [2024-09-30 21:50:44.980001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.806 [2024-09-30 21:50:44.980063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.806 [2024-09-30 21:50:44.980081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.807 #39 NEW cov: 12436 ft: 15070 corp: 17/314b lim: 25 exec/s: 39 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:00.807 [2024-09-30 21:50:45.039530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.807 [2024-09-30 21:50:45.039558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.039623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.807 [2024-09-30 21:50:45.039644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.807 #45 NEW cov: 12436 ft: 15155 corp: 18/324b lim: 25 exec/s: 45 rss: 74Mb L: 10/25 MS: 1 ChangeByte- 00:07:00.807 [2024-09-30 21:50:45.100036] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.807 [2024-09-30 21:50:45.100063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.100118] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.807 [2024-09-30 21:50:45.100140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.100222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.807 [2024-09-30 21:50:45.100243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.100312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.807 [2024-09-30 21:50:45.100333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.100397] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.807 [2024-09-30 21:50:45.100420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:00.807 #46 NEW cov: 12436 ft: 15178 corp: 19/349b lim: 25 exec/s: 46 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:00.807 [2024-09-30 21:50:45.160206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:00.807 [2024-09-30 21:50:45.160234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.160287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:00.807 [2024-09-30 21:50:45.160313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.160379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:00.807 [2024-09-30 21:50:45.160402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.160468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:00.807 [2024-09-30 21:50:45.160487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:00.807 [2024-09-30 21:50:45.160550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:00.807 [2024-09-30 21:50:45.160568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.067 #47 NEW cov: 12436 ft: 15195 corp: 20/374b lim: 25 exec/s: 47 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:07:01.067 [2024-09-30 21:50:45.199835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.067 [2024-09-30 21:50:45.199863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.067 #51 NEW cov: 12436 ft: 15217 corp: 21/379b lim: 25 exec/s: 51 rss: 74Mb L: 5/25 MS: 4 InsertByte-InsertByte-InsertByte-CopyPart- 00:07:01.067 [2024-09-30 21:50:45.240400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.067 [2024-09-30 21:50:45.240428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.240484] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.067 [2024-09-30 21:50:45.240505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.240569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.067 [2024-09-30 21:50:45.240588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.240651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.067 [2024-09-30 21:50:45.240671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.240733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.067 [2024-09-30 21:50:45.240752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.067 #52 NEW cov: 12436 ft: 15238 corp: 22/404b lim: 25 exec/s: 52 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:07:01.067 [2024-09-30 21:50:45.280533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.067 [2024-09-30 21:50:45.280561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.280618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.067 [2024-09-30 21:50:45.280638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.280703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.067 [2024-09-30 21:50:45.280722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.280785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.067 [2024-09-30 21:50:45.280804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.280868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.067 [2024-09-30 21:50:45.280886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.067 #53 NEW cov: 12436 ft: 15250 corp: 23/429b lim: 25 exec/s: 53 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:01.067 [2024-09-30 21:50:45.320646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.067 [2024-09-30 21:50:45.320675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.320728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.067 [2024-09-30 21:50:45.320749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.320814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.067 [2024-09-30 21:50:45.320834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.320897] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.067 [2024-09-30 21:50:45.320916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.067 [2024-09-30 21:50:45.320979] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.067 [2024-09-30 21:50:45.320996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.067 #54 NEW cov: 12436 ft: 15258 corp: 24/454b lim: 25 exec/s: 54 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:01.068 [2024-09-30 21:50:45.380832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.068 [2024-09-30 21:50:45.380861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.068 [2024-09-30 21:50:45.380917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.068 [2024-09-30 21:50:45.380936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.068 [2024-09-30 21:50:45.381004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.068 [2024-09-30 21:50:45.381026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.068 [2024-09-30 21:50:45.381089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.068 [2024-09-30 21:50:45.381107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.068 [2024-09-30 21:50:45.381175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.068 [2024-09-30 21:50:45.381195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.068 #55 NEW cov: 12436 ft: 15271 corp: 25/479b lim: 25 exec/s: 55 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:01.068 [2024-09-30 21:50:45.420462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.068 [2024-09-30 21:50:45.420490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 #56 NEW cov: 12436 ft: 15337 corp: 26/485b lim: 25 exec/s: 56 rss: 74Mb L: 6/25 MS: 1 EraseBytes- 00:07:01.328 [2024-09-30 21:50:45.460888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.328 [2024-09-30 21:50:45.460916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.460970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.328 [2024-09-30 21:50:45.460991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.461055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.328 [2024-09-30 21:50:45.461075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.461138] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.328 [2024-09-30 21:50:45.461157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.328 #57 NEW cov: 12436 ft: 15365 corp: 27/505b lim: 25 exec/s: 57 rss: 74Mb L: 20/25 MS: 1 EraseBytes- 00:07:01.328 [2024-09-30 21:50:45.500907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.328 [2024-09-30 21:50:45.500937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.500998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.328 [2024-09-30 21:50:45.501019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.501084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.328 [2024-09-30 21:50:45.501103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.328 #58 NEW cov: 12436 ft: 15380 corp: 28/520b lim: 25 exec/s: 58 rss: 74Mb L: 15/25 MS: 1 EraseBytes- 00:07:01.328 [2024-09-30 21:50:45.560873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.328 [2024-09-30 21:50:45.560902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 #59 NEW cov: 12436 ft: 15433 corp: 29/527b lim: 25 exec/s: 59 rss: 74Mb L: 7/25 MS: 1 InsertRepeatedBytes- 00:07:01.328 [2024-09-30 21:50:45.601428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.328 [2024-09-30 21:50:45.601457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.601511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.328 [2024-09-30 21:50:45.601534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.601599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.328 [2024-09-30 21:50:45.601623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.601685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.328 [2024-09-30 21:50:45.601704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.601769] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.328 [2024-09-30 21:50:45.601788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.328 #60 NEW cov: 12436 ft: 15468 corp: 30/552b lim: 25 
exec/s: 60 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:01.328 [2024-09-30 21:50:45.641409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.328 [2024-09-30 21:50:45.641438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.641497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.328 [2024-09-30 21:50:45.641519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.641601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.328 [2024-09-30 21:50:45.641625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.328 [2024-09-30 21:50:45.641692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.328 [2024-09-30 21:50:45.641714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.328 #61 NEW cov: 12436 ft: 15484 corp: 31/572b lim: 25 exec/s: 61 rss: 74Mb L: 20/25 MS: 1 ChangeBit- 00:07:01.589 [2024-09-30 21:50:45.701747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.701776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.701829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.701850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.701914] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.589 [2024-09-30 21:50:45.701934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.701997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.589 [2024-09-30 21:50:45.702015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.702079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.589 [2024-09-30 21:50:45.702097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.589 #62 NEW cov: 12436 ft: 15486 corp: 32/597b lim: 25 exec/s: 62 rss: 74Mb L: 25/25 MS: 1 CrossOver- 00:07:01.589 [2024-09-30 21:50:45.741502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.741530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.741604] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.741626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 #63 NEW cov: 12436 ft: 15493 corp: 33/607b lim: 25 exec/s: 63 rss: 74Mb L: 10/25 MS: 1 CMP- DE: "\304\037j\260\373=f\000"- 00:07:01.589 [2024-09-30 21:50:45.781944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.781973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.782029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.782050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.782116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.589 [2024-09-30 21:50:45.782137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.782198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.589 [2024-09-30 21:50:45.782216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.782279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.589 [2024-09-30 21:50:45.782299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.589 #64 NEW cov: 12436 ft: 15501 corp: 34/632b lim: 25 exec/s: 64 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:07:01.589 [2024-09-30 21:50:45.822025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.822054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.822110] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.822132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.822196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.589 [2024-09-30 21:50:45.822215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.822276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.589 [2024-09-30 21:50:45.822294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.822361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.589 
[2024-09-30 21:50:45.822380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.589 #65 NEW cov: 12436 ft: 15513 corp: 35/657b lim: 25 exec/s: 65 rss: 75Mb L: 25/25 MS: 1 PersAutoDict- DE: "\304\037j\260\373=f\000"- 00:07:01.589 [2024-09-30 21:50:45.882120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.882148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.882205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.882225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.882292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.589 [2024-09-30 21:50:45.882315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.882379] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.589 [2024-09-30 21:50:45.882398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.589 #66 NEW cov: 12436 ft: 15523 corp: 36/677b lim: 25 exec/s: 66 rss: 75Mb L: 20/25 MS: 1 ChangeBit- 00:07:01.589 [2024-09-30 21:50:45.922292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:07:01.589 [2024-09-30 21:50:45.922325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.922383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:07:01.589 [2024-09-30 21:50:45.922405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.922469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:07:01.589 [2024-09-30 21:50:45.922492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.922554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:07:01.589 [2024-09-30 21:50:45.922573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:01.589 [2024-09-30 21:50:45.922637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:07:01.589 [2024-09-30 21:50:45.922655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:01.589 #67 NEW cov: 12436 ft: 15525 corp: 37/702b lim: 25 exec/s: 33 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:07:01.589 #67 DONE cov: 12436 ft: 15525 corp: 37/702b lim: 25 exec/s: 33 rss: 75Mb 00:07:01.589 ###### Recommended dictionary. 
###### 00:07:01.589 "\304\037j\260\373=f\000" # Uses: 1 00:07:01.589 ###### End of recommended dictionary. ###### 00:07:01.589 Done 67 runs in 2 second(s) 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:07:01.849 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:01.850 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:01.850 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:01.850 21:50:46 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:07:01.850 [2024-09-30 21:50:46.115461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:01.850 [2024-09-30 21:50:46.115554] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055254 ] 00:07:02.109 [2024-09-30 21:50:46.287511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.109 [2024-09-30 21:50:46.353117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.109 [2024-09-30 21:50:46.411851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.109 [2024-09-30 21:50:46.428224] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:07:02.109 INFO: Running with entropic power schedule (0xFF, 100). 00:07:02.109 INFO: Seed: 3799575059 00:07:02.109 INFO: Loaded 1 modules (383956 inline 8-bit counters): 383956 [0x2be214c, 0x2c3fd20), 00:07:02.109 INFO: Loaded 1 PC tables (383956 PCs): 383956 [0x2c3fd20,0x321ba60), 00:07:02.109 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:07:02.109 INFO: A corpus is not provided, starting from an empty corpus 00:07:02.109 #2 INITED exec/s: 0 rss: 65Mb 00:07:02.109 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:02.109 This may also happen if the target rejected all inputs we tried so far 00:07:02.109 [2024-09-30 21:50:46.476778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998704417873 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.109 [2024-09-30 21:50:46.476809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.628 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:07:02.629 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:02.629 #17 NEW cov: 12281 ft: 12269 corp: 2/27b lim: 100 exec/s: 0 rss: 73Mb L: 26/26 MS: 5 InsertRepeatedBytes-ChangeBinInt-CMP-ShuffleBytes-InsertRepeatedBytes- DE: "\377\034"- 00:07:02.629 [2024-09-30 21:50:46.807707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.629 [2024-09-30 21:50:46.807742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.629 #18 NEW cov: 12394 ft: 12896 corp: 3/61b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CMP- DE: "\376\003\000\000\000\000\000\000"- 00:07:02.629 [2024-09-30 21:50:46.867828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.629 [2024-09-30 21:50:46.867858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.629 #19 NEW cov: 12400 ft: 13175 corp: 4/95b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 CopyPart- 00:07:02.629 [2024-09-30 21:50:46.927938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.629 
[2024-09-30 21:50:46.927970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.629 #20 NEW cov: 12485 ft: 13386 corp: 5/129b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:02.629 [2024-09-30 21:50:46.988086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:594 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.629 [2024-09-30 21:50:46.988115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.888 #21 NEW cov: 12485 ft: 13523 corp: 6/163b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeBit- 00:07:02.888 [2024-09-30 21:50:47.048287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:8705 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.888 [2024-09-30 21:50:47.048320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.888 #22 NEW cov: 12485 ft: 13673 corp: 7/197b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeBinInt- 00:07:02.888 [2024-09-30 21:50:47.088420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.888 [2024-09-30 21:50:47.088449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.888 #23 NEW cov: 12485 ft: 13731 corp: 8/231b lim: 100 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:07:02.888 [2024-09-30 21:50:47.128544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216173130190626046 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.888 [2024-09-30 21:50:47.128574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.888 #24 NEW cov: 12485 ft: 13841 corp: 9/258b lim: 100 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 EraseBytes- 00:07:02.888 [2024-09-30 21:50:47.188715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998704417873 len:20781 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.888 [2024-09-30 21:50:47.188744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:02.888 #30 NEW cov: 12485 ft: 13854 corp: 10/284b lim: 100 exec/s: 0 rss: 73Mb L: 26/34 MS: 1 ChangeByte- 00:07:02.888 [2024-09-30 21:50:47.228810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:02.888 [2024-09-30 21:50:47.228839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.148 #31 NEW cov: 12485 ft: 13907 corp: 11/311b lim: 100 exec/s: 0 rss: 73Mb L: 27/34 MS: 1 CrossOver- 00:07:03.148 [2024-09-30 21:50:47.288992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998704417873 len:20781 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.289020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.148 #32 NEW cov: 12485 ft: 13921 corp: 12/345b lim: 100 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 PersAutoDict- DE: "\376\003\000\000\000\000\000\000"- 00:07:03.148 [2024-09-30 21:50:47.349135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216735732251696382 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.349164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.148 NEW_FUNC[1/1]: 0x1bf7428 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:03.148 #33 NEW cov: 12508 ft: 13962 corp: 13/372b lim: 100 exec/s: 0 rss: 74Mb L: 27/34 MS: 1 ChangeBit- 00:07:03.148 [2024-09-30 21:50:47.409700] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.409728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.148 [2024-09-30 21:50:47.409773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.409789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.148 [2024-09-30 21:50:47.409844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5859553999884210513 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.409860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.148 #34 NEW cov: 12508 ft: 14781 corp: 14/434b lim: 100 exec/s: 0 rss: 74Mb L: 62/62 MS: 1 InsertRepeatedBytes- 00:07:03.148 [2024-09-30 21:50:47.469483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.469512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.148 #35 NEW cov: 12508 ft: 14790 corp: 15/460b lim: 100 exec/s: 35 rss: 74Mb L: 26/62 MS: 1 ChangeBinInt- 00:07:03.148 [2024-09-30 21:50:47.509618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.148 [2024-09-30 21:50:47.509647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.408 #36 NEW cov: 12508 ft: 14810 corp: 16/494b lim: 100 exec/s: 36 rss: 74Mb L: 34/62 MS: 1 ShuffleBytes- 00:07:03.408 [2024-09-30 21:50:47.550078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.550105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.408 [2024-09-30 21:50:47.550154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:18446744073697034239 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.550170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.408 [2024-09-30 21:50:47.550226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5859553999884210513 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.550243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:03.408 #37 NEW cov: 12508 ft: 14849 corp: 17/556b lim: 100 exec/s: 37 rss: 74Mb L: 62/62 MS: 1 ChangeByte- 00:07:03.408 [2024-09-30 21:50:47.609888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20991 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.609916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.408 #38 NEW cov: 12508 ft: 14908 corp: 18/577b lim: 100 exec/s: 38 rss: 74Mb L: 21/62 MS: 1 EraseBytes- 00:07:03.408 [2024-09-30 21:50:47.670215] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.670242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.408 [2024-09-30 21:50:47.670281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18374126829617369425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.670304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:03.408 #39 NEW cov: 12508 ft: 15297 corp: 19/619b lim: 100 exec/s: 39 rss: 74Mb L: 42/62 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\002"- 00:07:03.408 [2024-09-30 21:50:47.730232] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172932622130430 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.730261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.408 #40 NEW cov: 12508 ft: 15366 corp: 20/646b lim: 100 exec/s: 40 rss: 74Mb L: 27/62 MS: 1 ChangeByte- 00:07:03.408 [2024-09-30 21:50:47.770317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20991 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.408 [2024-09-30 21:50:47.770346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.667 #41 NEW cov: 12508 ft: 15454 corp: 21/667b lim: 100 exec/s: 41 rss: 74Mb L: 21/62 MS: 1 ChangeBinInt- 00:07:03.668 [2024-09-30 21:50:47.830488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.668 [2024-09-30 21:50:47.830517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.668 #42 NEW cov: 12508 ft: 15481 corp: 22/693b lim: 100 exec/s: 42 rss: 74Mb L: 
26/62 MS: 1 CopyPart- 00:07:03.668 [2024-09-30 21:50:47.870600] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:432345564412058878 len:8705 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.668 [2024-09-30 21:50:47.870628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.668 #43 NEW cov: 12508 ft: 15491 corp: 23/727b lim: 100 exec/s: 43 rss: 74Mb L: 34/62 MS: 1 ChangeBinInt- 00:07:03.668 [2024-09-30 21:50:47.910704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.668 [2024-09-30 21:50:47.910731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.668 #44 NEW cov: 12508 ft: 15498 corp: 24/761b lim: 100 exec/s: 44 rss: 74Mb L: 34/62 MS: 1 ShuffleBytes- 00:07:03.668 [2024-09-30 21:50:47.950826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.668 [2024-09-30 21:50:47.950854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.668 #45 NEW cov: 12508 ft: 15504 corp: 25/795b lim: 100 exec/s: 45 rss: 74Mb L: 34/62 MS: 1 CrossOver- 00:07:03.668 [2024-09-30 21:50:47.990982] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172782298275070 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.668 [2024-09-30 21:50:47.991009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.668 #46 NEW cov: 12508 ft: 15511 corp: 26/818b lim: 100 exec/s: 46 rss: 75Mb L: 23/62 MS: 1 EraseBytes- 00:07:03.927 [2024-09-30 21:50:48.051168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20991 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.051196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.927 #47 NEW cov: 12508 ft: 15517 corp: 27/839b lim: 100 exec/s: 47 rss: 75Mb L: 21/62 MS: 1 ChangeBit- 00:07:03.927 [2024-09-30 21:50:48.091297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216172786593242366 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.091333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.927 #48 NEW cov: 12508 ft: 15524 corp: 28/873b lim: 100 exec/s: 48 rss: 75Mb L: 34/62 MS: 1 ChangeBinInt- 00:07:03.927 [2024-09-30 21:50:48.131388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216173130190626046 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.131416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.927 #49 NEW cov: 12508 ft: 15533 corp: 29/903b lim: 100 exec/s: 49 rss: 75Mb L: 30/62 MS: 1 CopyPart- 00:07:03.927 [2024-09-30 21:50:48.171496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:432345564412058878 len:8705 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.171524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.927 #50 NEW cov: 12508 ft: 15537 corp: 30/937b lim: 100 exec/s: 50 rss: 75Mb L: 34/62 MS: 1 ShuffleBytes- 00:07:03.927 [2024-09-30 21:50:48.231667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:432345564412058878 len:8705 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.231696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:03.927 #51 NEW cov: 12508 ft: 15542 corp: 31/971b lim: 100 exec/s: 51 rss: 75Mb L: 34/62 MS: 1 ShuffleBytes- 00:07:03.927 [2024-09-30 21:50:48.291827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:216173130190626046 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:03.927 [2024-09-30 21:50:48.291856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.187 #54 NEW cov: 12508 ft: 15549 corp: 32/995b lim: 100 exec/s: 54 rss: 75Mb L: 24/62 MS: 3 EraseBytes-InsertByte-CopyPart- 00:07:04.187 [2024-09-30 21:50:48.332314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:452926464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.332341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.187 [2024-09-30 21:50:48.332392] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.332409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:04.187 [2024-09-30 21:50:48.332466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:5859553999884210513 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.332485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:04.187 #55 NEW cov: 12508 ft: 15634 corp: 33/1058b lim: 100 exec/s: 55 rss: 75Mb L: 63/63 MS: 1 InsertRepeatedBytes- 00:07:04.187 [2024-09-30 21:50:48.392140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:71776119229193470 len:82 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.392169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.187 #56 NEW cov: 12508 ft: 15731 corp: 34/1092b lim: 100 exec/s: 56 rss: 75Mb L: 34/63 MS: 1 ShuffleBytes- 00:07:04.187 [2024-09-30 21:50:48.452290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5859553998972853329 len:20818 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.452323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.187 [2024-09-30 21:50:48.512467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 
nsid:0 lba:5859553651080502353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:04.187 [2024-09-30 21:50:48.512494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:04.187 #58 NEW cov: 12508 ft: 15773 corp: 35/1122b lim: 100 exec/s: 29 rss: 75Mb L: 30/63 MS: 2 InsertByte-PersAutoDict- DE: "\000\000\000\000\000\000\000\002"- 00:07:04.187 #58 DONE cov: 12508 ft: 15773 corp: 35/1122b lim: 100 exec/s: 29 rss: 75Mb 00:07:04.187 ###### Recommended dictionary. ###### 00:07:04.187 "\377\034" # Uses: 0 00:07:04.187 "\376\003\000\000\000\000\000\000" # Uses: 2 00:07:04.187 "\000\000\000\000\000\000\000\002" # Uses: 1 00:07:04.187 ###### End of recommended dictionary. ###### 00:07:04.187 Done 58 runs in 2 second(s) 00:07:04.447 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:07:04.447 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:04.447 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.447 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:07:04.447 00:07:04.447 real 1m4.372s 00:07:04.447 user 1m40.829s 00:07:04.447 sys 0m7.180s 00:07:04.447 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.448 21:50:48 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:04.448 ************************************ 00:07:04.448 END TEST nvmf_llvm_fuzz 00:07:04.448 ************************************ 00:07:04.448 21:50:48 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:04.448 21:50:48 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:04.448 21:50:48 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:04.448 21:50:48 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.448 21:50:48 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.448 21:50:48 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:04.448 ************************************ 00:07:04.448 START TEST vfio_llvm_fuzz 00:07:04.448 ************************************ 00:07:04.448 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:07:04.710 * Looking for test storage... 
00:07:04.710 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.710 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.710 --rc genhtml_branch_coverage=1 00:07:04.711 --rc genhtml_function_coverage=1 00:07:04.711 --rc genhtml_legend=1 00:07:04.711 --rc geninfo_all_blocks=1 00:07:04.711 --rc geninfo_unexecuted_blocks=1 00:07:04.711 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.711 ' 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.711 --rc genhtml_branch_coverage=1 00:07:04.711 --rc genhtml_function_coverage=1 00:07:04.711 --rc genhtml_legend=1 00:07:04.711 --rc geninfo_all_blocks=1 00:07:04.711 --rc geninfo_unexecuted_blocks=1 00:07:04.711 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.711 ' 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.711 --rc genhtml_branch_coverage=1 00:07:04.711 --rc genhtml_function_coverage=1 00:07:04.711 --rc genhtml_legend=1 00:07:04.711 --rc geninfo_all_blocks=1 00:07:04.711 --rc geninfo_unexecuted_blocks=1 00:07:04.711 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.711 ' 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.711 --rc genhtml_branch_coverage=1 00:07:04.711 --rc genhtml_function_coverage=1 00:07:04.711 --rc genhtml_legend=1 00:07:04.711 --rc geninfo_all_blocks=1 00:07:04.711 --rc geninfo_unexecuted_blocks=1 00:07:04.711 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.711 ' 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:07:04.711 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:04.712 #define SPDK_CONFIG_H 00:07:04.712 #define SPDK_CONFIG_AIO_FSDEV 1 00:07:04.712 #define SPDK_CONFIG_APPS 1 00:07:04.712 #define SPDK_CONFIG_ARCH native 00:07:04.712 #undef SPDK_CONFIG_ASAN 00:07:04.712 #undef SPDK_CONFIG_AVAHI 00:07:04.712 #undef SPDK_CONFIG_CET 00:07:04.712 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:07:04.712 #define SPDK_CONFIG_COVERAGE 1 00:07:04.712 #define SPDK_CONFIG_CROSS_PREFIX 00:07:04.712 #undef SPDK_CONFIG_CRYPTO 00:07:04.712 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:04.712 #undef SPDK_CONFIG_CUSTOMOCF 00:07:04.712 #undef SPDK_CONFIG_DAOS 00:07:04.712 #define SPDK_CONFIG_DAOS_DIR 00:07:04.712 #define SPDK_CONFIG_DEBUG 1 00:07:04.712 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:04.712 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:04.712 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:04.712 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:04.712 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:04.712 #undef SPDK_CONFIG_DPDK_UADK 00:07:04.712 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:04.712 #define SPDK_CONFIG_EXAMPLES 1 00:07:04.712 #undef SPDK_CONFIG_FC 00:07:04.712 #define SPDK_CONFIG_FC_PATH 00:07:04.712 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:04.712 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:04.712 #define SPDK_CONFIG_FSDEV 1 00:07:04.712 #undef SPDK_CONFIG_FUSE 00:07:04.712 #define SPDK_CONFIG_FUZZER 1 00:07:04.712 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:04.712 #undef SPDK_CONFIG_GOLANG 00:07:04.712 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:04.712 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:04.712 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:04.712 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:04.712 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:04.712 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:04.712 #undef SPDK_CONFIG_HAVE_LZ4 00:07:04.712 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:07:04.712 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:07:04.712 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:04.712 #define SPDK_CONFIG_IDXD 1 00:07:04.712 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:04.712 #undef SPDK_CONFIG_IPSEC_MB 00:07:04.712 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:04.712 #define SPDK_CONFIG_ISAL 1 00:07:04.712 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:04.712 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:04.712 #define SPDK_CONFIG_LIBDIR 00:07:04.712 #undef SPDK_CONFIG_LTO 00:07:04.712 #define SPDK_CONFIG_MAX_LCORES 128 00:07:04.712 #define SPDK_CONFIG_NVME_CUSE 1 00:07:04.712 #undef SPDK_CONFIG_OCF 00:07:04.712 #define SPDK_CONFIG_OCF_PATH 00:07:04.712 #define SPDK_CONFIG_OPENSSL_PATH 00:07:04.712 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:04.712 #define SPDK_CONFIG_PGO_DIR 00:07:04.712 #undef SPDK_CONFIG_PGO_USE 00:07:04.712 #define SPDK_CONFIG_PREFIX /usr/local 00:07:04.712 #undef SPDK_CONFIG_RAID5F 00:07:04.712 #undef SPDK_CONFIG_RBD 00:07:04.712 #define SPDK_CONFIG_RDMA 1 00:07:04.712 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:04.712 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:04.712 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:04.712 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:04.712 #undef SPDK_CONFIG_SHARED 00:07:04.712 #undef SPDK_CONFIG_SMA 00:07:04.712 #define SPDK_CONFIG_TESTS 1 00:07:04.712 #undef SPDK_CONFIG_TSAN 00:07:04.712 #define SPDK_CONFIG_UBLK 1 00:07:04.712 #define SPDK_CONFIG_UBSAN 1 00:07:04.712 #undef SPDK_CONFIG_UNIT_TESTS 00:07:04.712 #undef SPDK_CONFIG_URING 00:07:04.712 #define SPDK_CONFIG_URING_PATH 00:07:04.712 #undef SPDK_CONFIG_URING_ZNS 00:07:04.712 #undef SPDK_CONFIG_USDT 00:07:04.712 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:04.712 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:04.712 #define SPDK_CONFIG_VFIO_USER 1 00:07:04.712 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:04.712 #define SPDK_CONFIG_VHOST 1 00:07:04.712 #define SPDK_CONFIG_VIRTIO 1 00:07:04.712 #undef SPDK_CONFIG_VTUNE 00:07:04.712 #define SPDK_CONFIG_VTUNE_DIR 00:07:04.712 #define SPDK_CONFIG_WERROR 1 00:07:04.712 #define SPDK_CONFIG_WPDK_DIR 00:07:04.712 #undef SPDK_CONFIG_XNVME 00:07:04.712 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.712 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:07:04.713 21:50:48 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:07:04.713 21:50:49 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:04.713 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
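For readers following the trace: each paired ": <value>" / "export SPDK_TEST_*" record above is bash xtrace output of a default-then-export idiom in autotest_common.sh. A minimal sketch of that idiom only (the variable name below is illustrative, not one of the real flags):

    : "${SPDK_TEST_EXAMPLE:=0}"   # assign the default only when the flag is unset or empty;
                                  # xtrace prints this step as ": <expanded value>"
    export SPDK_TEST_EXAMPLE      # xtrace prints "export SPDK_TEST_EXAMPLE"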
00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:04.714 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j112 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 1055656 ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 1055656 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.eirCfR 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.eirCfR/tests/vfio /tmp/spdk.eirCfR 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.715 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=678330368 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4606099456 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=53290614784 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=61730631680 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8440016896 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 
21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30860550144 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865313792 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=12340133888 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346126336 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5992448 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=30864465920 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=30865317888 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=851968 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=6173048832 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173061120 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:07:04.974 * Looking for test storage... 
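The df -T read loop traced above fills per-mount arrays that the storage check which follows consults. A rough, self-contained sketch of that technique, assuming the array and variable names seen in the trace (this is not the verbatim set_test_storage body):

    #!/usr/bin/env bash
    declare -A mounts fss sizes avails
    requested_size=2147483648                     # ~2 GiB, as requested above
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))          # df -T reports 1K blocks; keep bytes
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    for mount in "${!avails[@]}"; do              # report mounts with enough free space
        (( ${avails[$mount]} >= requested_size )) &&
            echo "candidate: $mount (${fss[$mount]}, ${avails[$mount]} bytes free)"
    done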
00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=53290614784 00:07:04.974 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10654609408 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.975 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.975 --rc genhtml_branch_coverage=1 00:07:04.975 --rc genhtml_function_coverage=1 00:07:04.975 --rc genhtml_legend=1 00:07:04.975 --rc geninfo_all_blocks=1 00:07:04.975 --rc geninfo_unexecuted_blocks=1 00:07:04.975 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.975 ' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.975 --rc genhtml_branch_coverage=1 00:07:04.975 --rc genhtml_function_coverage=1 00:07:04.975 --rc genhtml_legend=1 00:07:04.975 --rc geninfo_all_blocks=1 00:07:04.975 --rc geninfo_unexecuted_blocks=1 00:07:04.975 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.975 ' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.975 --rc genhtml_branch_coverage=1 00:07:04.975 --rc genhtml_function_coverage=1 00:07:04.975 --rc genhtml_legend=1 00:07:04.975 --rc geninfo_all_blocks=1 00:07:04.975 --rc geninfo_unexecuted_blocks=1 00:07:04.975 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.975 ' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.975 --rc genhtml_branch_coverage=1 00:07:04.975 --rc genhtml_function_coverage=1 00:07:04.975 --rc genhtml_legend=1 00:07:04.975 --rc geninfo_all_blocks=1 00:07:04.975 --rc geninfo_unexecuted_blocks=1 00:07:04.975 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:04.975 ' 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:07:04.975 21:50:49 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:07:04.975 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:04.975 21:50:49 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:07:04.975 [2024-09-30 21:50:49.251833] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:04.975 [2024-09-30 21:50:49.251909] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055880 ] 00:07:04.975 [2024-09-30 21:50:49.323810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.234 [2024-09-30 21:50:49.397920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.234 INFO: Running with entropic power schedule (0xFF, 100). 00:07:05.234 INFO: Seed: 2642601784 00:07:05.234 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:05.234 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:05.234 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:07:05.234 INFO: A corpus is not provided, starting from an empty corpus 00:07:05.234 #2 INITED exec/s: 0 rss: 67Mb 00:07:05.234 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:05.234 This may also happen if the target rejected all inputs we tried so far 00:07:05.493 [2024-09-30 21:50:49.630626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:07:05.752 NEW_FUNC[1/668]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:07:05.752 NEW_FUNC[2/668]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:05.752 #24 NEW cov: 11086 ft: 11036 corp: 2/7b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:06.011 #31 NEW cov: 11100 ft: 14830 corp: 3/13b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 2 InsertRepeatedBytes-CrossOver- 00:07:06.270 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:06.270 #40 NEW cov: 11117 ft: 15357 corp: 4/19b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 4 ChangeByte-CMP-ChangeBit-InsertByte- DE: "\000\000\000\000"- 00:07:06.270 #43 NEW cov: 11117 ft: 16428 corp: 5/25b lim: 6 exec/s: 43 rss: 75Mb L: 6/6 MS: 3 EraseBytes-CrossOver-InsertByte- 00:07:06.529 #46 NEW cov: 11117 ft: 16546 corp: 6/31b lim: 6 exec/s: 46 rss: 75Mb L: 6/6 MS: 3 ChangeBit-InsertRepeatedBytes-CopyPart- 00:07:06.789 #48 NEW cov: 11117 ft: 16688 corp: 7/37b lim: 6 exec/s: 48 rss: 75Mb L: 6/6 MS: 2 ChangeBit-CrossOver- 00:07:06.789 #49 NEW cov: 11117 ft: 17432 corp: 8/43b lim: 6 exec/s: 49 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:07.047 #53 NEW cov: 11117 ft: 17517 corp: 9/49b lim: 6 exec/s: 53 rss: 76Mb L: 6/6 MS: 4 ChangeByte-CopyPart-InsertByte-InsertRepeatedBytes- 00:07:07.305 #55 NEW cov: 11124 ft: 17674 corp: 10/55b lim: 6 exec/s: 55 rss: 76Mb L: 6/6 MS: 2 EraseBytes-CopyPart- 00:07:07.564 #56 NEW cov: 11124 ft: 17873 corp: 11/61b lim: 6 exec/s: 28 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:07:07.564 #56 DONE cov: 11124 ft: 17873 corp: 11/61b lim: 6 exec/s: 28 rss: 76Mb 
00:07:07.564 ###### Recommended dictionary. ###### 00:07:07.564 "\000\000\000\000" # Uses: 0 00:07:07.564 ###### End of recommended dictionary. ###### 00:07:07.564 Done 56 runs in 2 second(s) 00:07:07.564 [2024-09-30 21:50:51.694490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:07:07.824 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:07.824 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:07.825 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:07.825 21:50:51 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:07:07.825 [2024-09-30 21:50:51.992325] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:07.825 [2024-09-30 21:50:51.992397] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056277 ] 00:07:07.825 [2024-09-30 21:50:52.064016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.825 [2024-09-30 21:50:52.135999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.084 INFO: Running with entropic power schedule (0xFF, 100). 00:07:08.084 INFO: Seed: 1093703816 00:07:08.084 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:08.084 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:08.084 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:07:08.084 INFO: A corpus is not provided, starting from an empty corpus 00:07:08.084 #2 INITED exec/s: 0 rss: 67Mb 00:07:08.084 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:08.084 This may also happen if the target rejected all inputs we tried so far 00:07:08.084 [2024-09-30 21:50:52.390106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:07:08.084 [2024-09-30 21:50:52.434333] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:08.084 [2024-09-30 21:50:52.434357] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:08.084 [2024-09-30 21:50:52.434376] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:08.603 NEW_FUNC[1/670]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:07:08.603 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:08.603 #7 NEW cov: 11075 ft: 10926 corp: 2/5b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 5 CopyPart-InsertByte-ChangeByte-ShuffleBytes-InsertByte- 00:07:08.603 [2024-09-30 21:50:52.888960] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:08.603 [2024-09-30 21:50:52.888994] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:08.603 [2024-09-30 21:50:52.889013] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:08.862 #8 NEW cov: 11089 ft: 14027 corp: 3/9b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 1 ChangeByte- 00:07:08.862 [2024-09-30 21:50:53.051787] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:08.862 [2024-09-30 21:50:53.051810] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:08.862 [2024-09-30 21:50:53.051828] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:08.862 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:08.862 #9 NEW cov: 11106 ft: 14861 corp: 4/13b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:07:08.862 [2024-09-30 21:50:53.216224] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:08.862 [2024-09-30 21:50:53.216246] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid 
argument 00:07:08.862 [2024-09-30 21:50:53.216263] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.121 #10 NEW cov: 11109 ft: 15356 corp: 5/17b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 ChangeBit- 00:07:09.121 [2024-09-30 21:50:53.382130] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.121 [2024-09-30 21:50:53.382154] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.121 [2024-09-30 21:50:53.382171] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.121 #11 NEW cov: 11109 ft: 16245 corp: 6/21b lim: 4 exec/s: 11 rss: 75Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:09.381 [2024-09-30 21:50:53.548073] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.381 [2024-09-30 21:50:53.548096] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.381 [2024-09-30 21:50:53.548114] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.381 #15 NEW cov: 11109 ft: 16287 corp: 7/25b lim: 4 exec/s: 15 rss: 75Mb L: 4/4 MS: 4 EraseBytes-CopyPart-CrossOver-InsertByte- 00:07:09.381 [2024-09-30 21:50:53.712677] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.381 [2024-09-30 21:50:53.712700] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.381 [2024-09-30 21:50:53.712717] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.640 #16 NEW cov: 11109 ft: 17286 corp: 8/29b lim: 4 exec/s: 16 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:09.640 [2024-09-30 21:50:53.891049] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.640 [2024-09-30 21:50:53.891071] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.640 [2024-09-30 21:50:53.891089] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.640 #22 NEW cov: 11109 ft: 17800 corp: 9/33b lim: 4 exec/s: 22 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:07:09.899 [2024-09-30 21:50:54.058103] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.899 [2024-09-30 21:50:54.058125] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.900 [2024-09-30 21:50:54.058143] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:09.900 #28 NEW cov: 11116 ft: 17843 corp: 10/37b lim: 4 exec/s: 28 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:09.900 [2024-09-30 21:50:54.222073] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:09.900 [2024-09-30 21:50:54.222097] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:09.900 [2024-09-30 21:50:54.222117] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:07:10.159 #39 NEW cov: 11116 ft: 17920 corp: 11/41b lim: 4 exec/s: 39 rss: 76Mb L: 4/4 MS: 1 ShuffleBytes- 00:07:10.159 [2024-09-30 21:50:54.386861] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:07:10.159 [2024-09-30 21:50:54.386883] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:07:10.159 [2024-09-30 21:50:54.386901] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return 
failure 00:07:10.159 #40 NEW cov: 11116 ft: 17995 corp: 12/45b lim: 4 exec/s: 20 rss: 76Mb L: 4/4 MS: 1 ChangeByte- 00:07:10.159 #40 DONE cov: 11116 ft: 17995 corp: 12/45b lim: 4 exec/s: 20 rss: 76Mb 00:07:10.159 Done 40 runs in 2 second(s) 00:07:10.159 [2024-09-30 21:50:54.504490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:07:10.419 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:10.419 21:50:54 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:07:10.679 [2024-09-30 21:50:54.795919] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:10.679 [2024-09-30 21:50:54.796000] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056703 ] 00:07:10.679 [2024-09-30 21:50:54.868390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.679 [2024-09-30 21:50:54.941104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.938 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.938 INFO: Seed: 3898649693 00:07:10.938 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:10.938 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:10.938 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:07:10.938 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.938 #2 INITED exec/s: 0 rss: 67Mb 00:07:10.938 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:10.938 This may also happen if the target rejected all inputs we tried so far 00:07:10.938 [2024-09-30 21:50:55.188415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:07:10.938 [2024-09-30 21:50:55.242389] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: cmd 5 failed: Invalid argument 00:07:10.938 [2024-09-30 21:50:55.242425] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:11.454 NEW_FUNC[1/663]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:07:11.454 NEW_FUNC[2/663]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:11.454 #36 NEW cov: 10859 ft: 11034 corp: 2/9b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 4 ChangeBinInt-InsertRepeatedBytes-InsertRepeatedBytes-InsertByte- 00:07:11.454 [2024-09-30 21:50:55.694214] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:11.454 NEW_FUNC[1/7]: 0x443a18 in write_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:353 00:07:11.454 NEW_FUNC[2/7]: 0x452f38 in bdev_malloc_submit_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/bdev/malloc/bdev_malloc.c:549 00:07:11.454 #40 NEW cov: 11089 ft: 14365 corp: 3/17b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 4 EraseBytes-CrossOver-ShuffleBytes-CopyPart- 00:07:11.735 [2024-09-30 21:50:55.878627] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:11.735 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:11.735 #45 NEW cov: 11106 ft: 15234 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 5 EraseBytes-ChangeByte-EraseBytes-CopyPart-CrossOver- 00:07:11.735 [2024-09-30 21:50:56.057083] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:11.735 [2024-09-30 21:50:56.057117] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:11.994 #47 NEW cov: 11106 ft: 16206 corp: 5/33b lim: 8 exec/s: 47 rss: 75Mb L: 8/8 MS: 2 CrossOver-CopyPart- 00:07:11.994 [2024-09-30 21:50:56.232478] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized 
argument length, command 5 00:07:11.994 #48 NEW cov: 11106 ft: 16825 corp: 6/41b lim: 8 exec/s: 48 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:12.252 [2024-09-30 21:50:56.410617] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:12.252 #49 NEW cov: 11106 ft: 17208 corp: 7/49b lim: 8 exec/s: 49 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:07:12.252 [2024-09-30 21:50:56.588844] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:12.252 [2024-09-30 21:50:56.588875] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:12.511 #50 NEW cov: 11106 ft: 17372 corp: 8/57b lim: 8 exec/s: 50 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:12.511 [2024-09-30 21:50:56.767755] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:07:12.511 #51 NEW cov: 11106 ft: 17463 corp: 9/65b lim: 8 exec/s: 51 rss: 75Mb L: 8/8 MS: 1 ChangeByte- 00:07:12.770 [2024-09-30 21:50:56.944887] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: no payload for cmd5 00:07:12.770 [2024-09-30 21:50:56.944918] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:12.770 #57 NEW cov: 11113 ft: 17497 corp: 10/73b lim: 8 exec/s: 57 rss: 75Mb L: 8/8 MS: 1 CopyPart- 00:07:12.770 [2024-09-30 21:50:57.118407] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-2/domain/1: msg0: cmd 5 failed: Invalid argument 00:07:12.770 [2024-09-30 21:50:57.118438] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 5 return failure 00:07:13.029 #58 NEW cov: 11113 ft: 17847 corp: 11/81b lim: 8 exec/s: 29 rss: 75Mb L: 8/8 MS: 1 ChangeBit- 00:07:13.029 #58 DONE cov: 11113 ft: 17847 corp: 11/81b lim: 8 exec/s: 29 rss: 75Mb 00:07:13.029 Done 58 runs in 2 second(s) 00:07:13.029 [2024-09-30 21:50:57.244500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:07:13.289 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:13.289 21:50:57 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:07:13.289 [2024-09-30 21:50:57.543156] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:13.289 [2024-09-30 21:50:57.543230] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057244 ] 00:07:13.289 [2024-09-30 21:50:57.615153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.548 [2024-09-30 21:50:57.689272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.548 INFO: Running with entropic power schedule (0xFF, 100). 00:07:13.548 INFO: Seed: 2347700354 00:07:13.548 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:13.548 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:13.548 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:07:13.548 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.548 #2 INITED exec/s: 0 rss: 67Mb 00:07:13.548 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:13.548 This may also happen if the target rejected all inputs we tried so far 00:07:13.806 [2024-09-30 21:50:57.933829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:07:14.064 NEW_FUNC[1/669]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:07:14.064 NEW_FUNC[2/669]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:14.064 #36 NEW cov: 11057 ft: 10901 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 ChangeByte-InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:14.323 #37 NEW cov: 11074 ft: 13809 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:07:14.581 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:14.582 #38 NEW cov: 11094 ft: 14254 corp: 4/97b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:14.582 #49 NEW cov: 11094 ft: 16406 corp: 5/129b lim: 32 exec/s: 49 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:14.839 #50 NEW cov: 11094 ft: 16532 corp: 6/161b lim: 32 exec/s: 50 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:15.097 #51 NEW cov: 11094 ft: 16719 corp: 7/193b lim: 32 exec/s: 51 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:15.097 #52 NEW cov: 11094 ft: 16779 corp: 8/225b lim: 32 exec/s: 52 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:15.356 #53 NEW cov: 11094 ft: 17529 corp: 9/257b lim: 32 exec/s: 53 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:15.614 #59 NEW cov: 11101 ft: 17572 corp: 10/289b lim: 32 exec/s: 59 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:15.614 #65 NEW cov: 11101 ft: 17882 corp: 11/321b lim: 32 exec/s: 32 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:15.614 #65 DONE cov: 11101 ft: 17882 corp: 11/321b lim: 32 exec/s: 32 rss: 75Mb 00:07:15.614 Done 65 runs in 2 second(s) 00:07:15.614 [2024-09-30 21:50:59.968487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local 
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:15.872 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:15.873 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:07:15.873 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:16.132 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:16.132 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:16.132 21:51:00 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:07:16.132 [2024-09-30 21:51:00.270178] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:16.132 [2024-09-30 21:51:00.270247] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057805 ] 00:07:16.132 [2024-09-30 21:51:00.342828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.132 [2024-09-30 21:51:00.416894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.391 INFO: Running with entropic power schedule (0xFF, 100). 00:07:16.391 INFO: Seed: 784703076 00:07:16.391 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:16.391 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:16.391 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:07:16.391 INFO: A corpus is not provided, starting from an empty corpus 00:07:16.391 #2 INITED exec/s: 0 rss: 67Mb 00:07:16.391 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:16.391 This may also happen if the target rejected all inputs we tried so far 00:07:16.391 [2024-09-30 21:51:00.665273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:07:16.909 NEW_FUNC[1/669]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:07:16.909 NEW_FUNC[2/669]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:16.909 #147 NEW cov: 11072 ft: 10970 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 5 InsertRepeatedBytes-ChangeBinInt-ChangeBinInt-ChangeBit-InsertByte- 00:07:17.168 #173 NEW cov: 11086 ft: 14124 corp: 3/65b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:17.168 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:17.168 #174 NEW cov: 11103 ft: 16030 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:07:17.427 #189 NEW cov: 11106 ft: 16422 corp: 5/129b lim: 32 exec/s: 189 rss: 75Mb L: 32/32 MS: 5 EraseBytes-CopyPart-ChangeBit-CrossOver-InsertRepeatedBytes- 00:07:17.687 #190 NEW cov: 11106 ft: 16642 corp: 6/161b lim: 32 exec/s: 190 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:17.946 #196 NEW cov: 11106 ft: 16907 corp: 7/193b lim: 32 exec/s: 196 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:17.946 #197 NEW cov: 11106 ft: 17076 corp: 8/225b lim: 32 exec/s: 197 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:07:18.205 #203 NEW cov: 11106 ft: 17698 corp: 9/257b lim: 32 exec/s: 203 rss: 75Mb L: 32/32 MS: 1 ShuffleBytes- 00:07:18.465 #204 NEW cov: 11113 ft: 18087 corp: 10/289b lim: 32 exec/s: 204 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:07:18.465 #205 NEW cov: 11113 ft: 18279 corp: 11/321b lim: 32 exec/s: 102 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:07:18.465 #205 DONE cov: 11113 ft: 18279 corp: 11/321b lim: 32 exec/s: 102 rss: 75Mb 00:07:18.465 Done 205 runs in 2 second(s) 00:07:18.465 [2024-09-30 21:51:02.792493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 
00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:07:18.725 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:18.725 21:51:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:07:18.725 [2024-09-30 21:51:03.086545] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:18.725 [2024-09-30 21:51:03.086638] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058441 ] 00:07:18.985 [2024-09-30 21:51:03.157334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.985 [2024-09-30 21:51:03.228816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.244 INFO: Running with entropic power schedule (0xFF, 100). 00:07:19.245 INFO: Seed: 3591701304 00:07:19.245 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:19.245 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:19.245 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:07:19.245 INFO: A corpus is not provided, starting from an empty corpus 00:07:19.245 #2 INITED exec/s: 0 rss: 67Mb 00:07:19.245 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:19.245 This may also happen if the target rejected all inputs we tried so far 00:07:19.245 [2024-09-30 21:51:03.464974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:07:19.245 [2024-09-30 21:51:03.520354] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:19.245 [2024-09-30 21:51:03.520391] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:19.763 NEW_FUNC[1/670]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:07:19.763 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:19.763 #85 NEW cov: 11084 ft: 10961 corp: 2/14b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 3 InsertRepeatedBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:19.763 [2024-09-30 21:51:03.970778] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:19.763 [2024-09-30 21:51:03.970823] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:19.763 #86 NEW cov: 11098 ft: 14250 corp: 3/27b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:07:19.763 [2024-09-30 21:51:04.131542] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:19.763 [2024-09-30 21:51:04.131576] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.022 #87 NEW cov: 11098 ft: 15296 corp: 4/40b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ShuffleBytes- 00:07:20.022 [2024-09-30 21:51:04.293375] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.022 [2024-09-30 21:51:04.293406] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.281 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:20.281 #88 NEW cov: 11115 ft: 16169 corp: 5/53b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBinInt- 00:07:20.281 [2024-09-30 21:51:04.455170] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.281 [2024-09-30 21:51:04.455205] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.281 #89 NEW cov: 11115 ft: 16238 corp: 6/66b lim: 13 exec/s: 89 rss: 75Mb L: 13/13 MS: 1 CrossOver- 00:07:20.281 [2024-09-30 21:51:04.612991] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.281 [2024-09-30 21:51:04.613022] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.541 #95 NEW cov: 11115 ft: 16348 corp: 7/79b lim: 13 exec/s: 95 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:07:20.541 [2024-09-30 21:51:04.773294] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.541 [2024-09-30 21:51:04.773335] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.541 #96 NEW cov: 11115 ft: 16432 corp: 8/92b lim: 13 exec/s: 96 rss: 75Mb L: 13/13 MS: 1 CopyPart- 00:07:20.800 [2024-09-30 21:51:04.933121] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.800 [2024-09-30 21:51:04.933151] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:20.800 #97 NEW cov: 11115 ft: 
16443 corp: 9/105b lim: 13 exec/s: 97 rss: 75Mb L: 13/13 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:20.800 [2024-09-30 21:51:05.088913] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:20.800 [2024-09-30 21:51:05.088944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:21.059 #98 NEW cov: 11115 ft: 16467 corp: 10/118b lim: 13 exec/s: 98 rss: 76Mb L: 13/13 MS: 1 CopyPart- 00:07:21.059 [2024-09-30 21:51:05.249119] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:21.059 [2024-09-30 21:51:05.249150] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:21.059 #99 NEW cov: 11122 ft: 16808 corp: 11/131b lim: 13 exec/s: 99 rss: 76Mb L: 13/13 MS: 1 CrossOver- 00:07:21.059 [2024-09-30 21:51:05.409343] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:21.059 [2024-09-30 21:51:05.409383] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:21.319 #100 NEW cov: 11122 ft: 16881 corp: 12/144b lim: 13 exec/s: 50 rss: 76Mb L: 13/13 MS: 1 CMP- DE: "\001\001"- 00:07:21.319 #100 DONE cov: 11122 ft: 16881 corp: 12/144b lim: 13 exec/s: 50 rss: 76Mb 00:07:21.319 ###### Recommended dictionary. ###### 00:07:21.319 "\001\000\000\000" # Uses: 0 00:07:21.319 "\001\001" # Uses: 0 00:07:21.319 ###### End of recommended dictionary. ###### 00:07:21.319 Done 100 runs in 2 second(s) 00:07:21.319 [2024-09-30 21:51:05.521514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 
's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:07:21.579 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:07:21.579 21:51:05 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:07:21.579 [2024-09-30 21:51:05.808221] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:21.579 [2024-09-30 21:51:05.808293] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059219 ] 00:07:21.579 [2024-09-30 21:51:05.878542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.839 [2024-09-30 21:51:05.950186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.839 INFO: Running with entropic power schedule (0xFF, 100). 00:07:21.839 INFO: Seed: 2016745653 00:07:21.839 INFO: Loaded 1 modules (381192 inline 8-bit counters): 381192 [0x2ba494c, 0x2c01a54), 00:07:21.839 INFO: Loaded 1 PC tables (381192 PCs): 381192 [0x2c01a58,0x31d2ad8), 00:07:21.839 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:07:21.839 INFO: A corpus is not provided, starting from an empty corpus 00:07:21.839 #2 INITED exec/s: 0 rss: 67Mb 00:07:21.839 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:21.839 This may also happen if the target rejected all inputs we tried so far 00:07:21.839 [2024-09-30 21:51:06.195260] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:07:22.098 [2024-09-30 21:51:06.248350] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:22.098 [2024-09-30 21:51:06.248381] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:22.358 NEW_FUNC[1/669]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:07:22.358 NEW_FUNC[2/669]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:07:22.358 #37 NEW cov: 11070 ft: 10948 corp: 2/10b lim: 9 exec/s: 0 rss: 73Mb L: 9/9 MS: 5 CopyPart-InsertByte-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:07:22.358 [2024-09-30 21:51:06.702427] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:22.358 [2024-09-30 21:51:06.702469] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:22.617 NEW_FUNC[1/1]: 0x1568a08 in sq_headp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:565 00:07:22.617 #43 NEW cov: 11086 ft: 14435 corp: 3/19b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:22.617 [2024-09-30 21:51:06.891203] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:22.617 [2024-09-30 21:51:06.891236] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:22.876 NEW_FUNC[1/1]: 0x1bc3878 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:656 00:07:22.876 #49 NEW cov: 11103 ft: 14865 corp: 4/28b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:22.876 [2024-09-30 21:51:07.070752] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:22.876 [2024-09-30 21:51:07.070783] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:22.876 #50 NEW cov: 11103 ft: 15421 corp: 5/37b lim: 9 exec/s: 50 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:07:23.136 [2024-09-30 21:51:07.253775] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.136 [2024-09-30 21:51:07.253806] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.136 #51 NEW cov: 11103 ft: 15642 corp: 6/46b lim: 9 exec/s: 51 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:23.136 [2024-09-30 21:51:07.436569] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.136 [2024-09-30 21:51:07.436601] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.395 #56 NEW cov: 11103 ft: 16243 corp: 7/55b lim: 9 exec/s: 56 rss: 75Mb L: 9/9 MS: 5 EraseBytes-CopyPart-CopyPart-ChangeBit-InsertByte- 00:07:23.395 [2024-09-30 21:51:07.616497] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.395 [2024-09-30 21:51:07.616531] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.395 #67 NEW cov: 11103 ft: 17115 corp: 8/64b lim: 9 exec/s: 67 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:07:23.655 [2024-09-30 21:51:07.792757] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: 
Invalid argument 00:07:23.655 [2024-09-30 21:51:07.792786] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.655 #68 NEW cov: 11103 ft: 17378 corp: 9/73b lim: 9 exec/s: 68 rss: 75Mb L: 9/9 MS: 1 ChangeBit- 00:07:23.655 [2024-09-30 21:51:07.971589] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.655 [2024-09-30 21:51:07.971619] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.915 #69 NEW cov: 11110 ft: 17561 corp: 10/82b lim: 9 exec/s: 69 rss: 75Mb L: 9/9 MS: 1 CopyPart- 00:07:23.915 [2024-09-30 21:51:08.152153] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:07:23.915 [2024-09-30 21:51:08.152183] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:07:23.915 #70 NEW cov: 11110 ft: 17572 corp: 11/91b lim: 9 exec/s: 35 rss: 75Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:23.915 #70 DONE cov: 11110 ft: 17572 corp: 11/91b lim: 9 exec/s: 35 rss: 75Mb 00:07:23.915 Done 70 runs in 2 second(s) 00:07:23.915 [2024-09-30 21:51:08.278495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:07:24.173 00:07:24.173 real 0m19.783s 00:07:24.173 user 0m27.534s 00:07:24.173 sys 0m1.939s 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.173 21:51:08 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:24.173 ************************************ 00:07:24.173 END TEST vfio_llvm_fuzz 00:07:24.173 ************************************ 00:07:24.432 00:07:24.432 real 1m24.512s 00:07:24.432 user 2m8.530s 00:07:24.432 sys 0m9.340s 00:07:24.432 21:51:08 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.432 21:51:08 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:24.432 ************************************ 00:07:24.432 END TEST llvm_fuzz 00:07:24.432 ************************************ 00:07:24.432 21:51:08 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:07:24.432 21:51:08 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:07:24.432 21:51:08 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:07:24.432 21:51:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.432 21:51:08 -- common/autotest_common.sh@10 -- # set +x 00:07:24.432 21:51:08 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:07:24.432 21:51:08 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:07:24.432 21:51:08 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:07:24.432 21:51:08 -- common/autotest_common.sh@10 -- # set +x 00:07:31.099 INFO: APP EXITING 00:07:31.099 INFO: killing all VMs 00:07:31.099 INFO: killing vhost app 00:07:31.099 INFO: EXIT DONE 00:07:33.005 Waiting for block devices as requested 00:07:33.005 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:33.264 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:33.264 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:33.264 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:33.523 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 
00:07:33.524 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:33.524 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:33.524 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:33.782 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:33.782 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:33.782 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:34.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:34.041 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:34.041 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:34.299 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:34.299 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:34.299 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:07:37.592 Cleaning 00:07:37.592 Removing: /dev/shm/spdk_tgt_trace.pid1030621 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1028156 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1029294 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1030621 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1031086 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1032169 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1032200 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1033304 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1033316 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1033746 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1034074 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1034397 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1034747 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1035073 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1035358 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1035641 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1035963 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1036724 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1039767 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1040057 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1040353 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1040361 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1040928 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1041021 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1041686 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1041755 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1042063 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1042068 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1042358 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1042367 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1043001 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1043283 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1043537 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1043650 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1044403 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1044764 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1045224 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1045758 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1046121 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1046579 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1047118 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1047449 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1047937 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1048472 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1048767 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1049298 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1049827 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1050135 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1050646 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1051174 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1051472 00:07:37.592 Removing: /var/run/dpdk/spdk_pid1052007 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1052498 00:07:37.851 
Removing: /var/run/dpdk/spdk_pid1052832 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1053365 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1053873 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1054188 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1054725 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1055254 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1055880 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1056277 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1056703 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1057244 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1057805 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1058441 00:07:37.851 Removing: /var/run/dpdk/spdk_pid1059219 00:07:37.851 Clean 00:07:37.851 21:51:22 -- common/autotest_common.sh@1451 -- # return 0 00:07:37.851 21:51:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:07:37.851 21:51:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.851 21:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:37.851 21:51:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:07:37.851 21:51:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.851 21:51:22 -- common/autotest_common.sh@10 -- # set +x 00:07:37.851 21:51:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:07:37.851 21:51:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:07:37.851 21:51:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:07:37.851 21:51:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:07:37.851 21:51:22 -- spdk/autotest.sh@394 -- # hostname 00:07:37.851 21:51:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:07:38.108 geninfo: WARNING: invalid characters removed from testname! 
00:07:44.679 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:07:46.059 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:07:50.254 21:51:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:07:58.380 21:51:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:03.653 21:51:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:08.927 21:51:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:14.203 21:51:57 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:18.396 21:52:02 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:08:23.667 21:52:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:08:23.667 21:52:07 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:08:23.667 21:52:08 -- common/autotest_common.sh@1681 -- $ lcov --version 00:08:23.667 21:52:08 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:08:23.927 21:52:08 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:08:23.927 21:52:08 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:08:23.927 21:52:08 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:08:23.927 21:52:08 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:08:23.927 21:52:08 -- scripts/common.sh@336 -- $ IFS=.-: 00:08:23.927 21:52:08 -- scripts/common.sh@336 -- $ read -ra ver1 00:08:23.927 21:52:08 -- scripts/common.sh@337 -- $ IFS=.-: 00:08:23.927 21:52:08 -- scripts/common.sh@337 -- $ read -ra ver2 00:08:23.927 21:52:08 -- scripts/common.sh@338 -- $ local 'op=<' 00:08:23.927 21:52:08 -- scripts/common.sh@340 -- $ ver1_l=2 00:08:23.927 21:52:08 -- scripts/common.sh@341 -- $ ver2_l=1 00:08:23.927 21:52:08 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:08:23.927 21:52:08 -- scripts/common.sh@344 -- $ case "$op" in 00:08:23.927 21:52:08 -- scripts/common.sh@345 -- $ : 1 00:08:23.927 21:52:08 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:08:23.927 21:52:08 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.927 21:52:08 -- scripts/common.sh@365 -- $ decimal 1 00:08:23.927 21:52:08 -- scripts/common.sh@353 -- $ local d=1 00:08:23.927 21:52:08 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:08:23.927 21:52:08 -- scripts/common.sh@355 -- $ echo 1 00:08:23.927 21:52:08 -- scripts/common.sh@365 -- $ ver1[v]=1 00:08:23.927 21:52:08 -- scripts/common.sh@366 -- $ decimal 2 00:08:23.927 21:52:08 -- scripts/common.sh@353 -- $ local d=2 00:08:23.927 21:52:08 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:08:23.927 21:52:08 -- scripts/common.sh@355 -- $ echo 2 00:08:23.927 21:52:08 -- scripts/common.sh@366 -- $ ver2[v]=2 00:08:23.927 21:52:08 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:08:23.927 21:52:08 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:08:23.927 21:52:08 -- scripts/common.sh@368 -- $ return 0 00:08:23.927 21:52:08 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.927 21:52:08 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:08:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.927 --rc genhtml_branch_coverage=1 00:08:23.927 --rc genhtml_function_coverage=1 00:08:23.927 --rc genhtml_legend=1 00:08:23.927 --rc geninfo_all_blocks=1 00:08:23.927 --rc geninfo_unexecuted_blocks=1 00:08:23.927 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.927 ' 00:08:23.927 21:52:08 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:08:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.927 --rc genhtml_branch_coverage=1 00:08:23.927 --rc genhtml_function_coverage=1 00:08:23.927 --rc genhtml_legend=1 00:08:23.928 --rc geninfo_all_blocks=1 00:08:23.928 --rc geninfo_unexecuted_blocks=1 00:08:23.928 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.928 ' 00:08:23.928 21:52:08 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:08:23.928 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:08:23.928 --rc genhtml_branch_coverage=1 00:08:23.928 --rc genhtml_function_coverage=1 00:08:23.928 --rc genhtml_legend=1 00:08:23.928 --rc geninfo_all_blocks=1 00:08:23.928 --rc geninfo_unexecuted_blocks=1 00:08:23.928 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.928 ' 00:08:23.928 21:52:08 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:08:23.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.928 --rc genhtml_branch_coverage=1 00:08:23.928 --rc genhtml_function_coverage=1 00:08:23.928 --rc genhtml_legend=1 00:08:23.928 --rc geninfo_all_blocks=1 00:08:23.928 --rc geninfo_unexecuted_blocks=1 00:08:23.928 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:23.928 ' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:23.928 21:52:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:08:23.928 21:52:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:23.928 21:52:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.928 21:52:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.928 21:52:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.928 21:52:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.928 21:52:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.928 21:52:08 -- paths/export.sh@5 -- $ export PATH 00:08:23.928 21:52:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.928 21:52:08 -- common/autobuild_common.sh@478 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:08:23.928 21:52:08 -- common/autobuild_common.sh@479 -- $ date +%s 00:08:23.928 21:52:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727725928.XXXXXX 00:08:23.928 21:52:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727725928.M4XPw6 00:08:23.928 21:52:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:08:23.928 21:52:08 -- 
common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@495 -- $ get_config_params 00:08:23.928 21:52:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:08:23.928 21:52:08 -- common/autotest_common.sh@10 -- $ set +x 00:08:23.928 21:52:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:23.928 21:52:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:08:23.928 21:52:08 -- pm/common@17 -- $ local monitor 00:08:23.928 21:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:23.928 21:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:23.928 21:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:23.928 21:52:08 -- pm/common@21 -- $ date +%s 00:08:23.928 21:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:23.928 21:52:08 -- pm/common@21 -- $ date +%s 00:08:23.928 21:52:08 -- pm/common@25 -- $ sleep 1 00:08:23.928 21:52:08 -- pm/common@21 -- $ date +%s 00:08:23.928 21:52:08 -- pm/common@21 -- $ date +%s 00:08:23.928 21:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727725928 00:08:23.928 21:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727725928 00:08:23.928 21:52:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727725928 00:08:23.928 21:52:08 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1727725928 00:08:23.928 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727725928_collect-vmstat.pm.log 00:08:23.928 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727725928_collect-cpu-load.pm.log 00:08:23.928 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727725928_collect-cpu-temp.pm.log 00:08:23.928 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1727725928_collect-bmc-pm.bmc.pm.log 00:08:24.863 21:52:09 -- common/autobuild_common.sh@498 -- $ trap 
stop_monitor_resources EXIT
00:08:24.864 21:52:09 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:08:24.864 21:52:09 -- spdk/autopackage.sh@14 -- $ timing_finish
00:08:24.864 21:52:09 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:24.864 21:52:09 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:08:24.864 21:52:09 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:24.864 21:52:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:08:24.864 21:52:09 -- pm/common@29 -- $ signal_monitor_resources TERM
00:08:24.864 21:52:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:08:24.864 21:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:24.864 21:52:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:08:24.864 21:52:09 -- pm/common@44 -- $ pid=1067568
00:08:24.864 21:52:09 -- pm/common@50 -- $ kill -TERM 1067568
00:08:24.864 21:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:24.864 21:52:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:08:24.864 21:52:09 -- pm/common@44 -- $ pid=1067570
00:08:24.864 21:52:09 -- pm/common@50 -- $ kill -TERM 1067570
00:08:24.864 21:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:24.864 21:52:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:08:24.864 21:52:09 -- pm/common@44 -- $ pid=1067572
00:08:24.864 21:52:09 -- pm/common@50 -- $ kill -TERM 1067572
00:08:24.864 21:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:24.864 21:52:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:08:24.864 21:52:09 -- pm/common@44 -- $ pid=1067596
00:08:24.864 21:52:09 -- pm/common@50 -- $ sudo -E kill -TERM 1067596
+ [[ -n 917943 ]]
+ sudo kill 917943
00:08:25.132 [Pipeline] }
00:08:25.147 [Pipeline] // stage
00:08:25.152 [Pipeline] }
00:08:25.166 [Pipeline] // timeout
00:08:25.171 [Pipeline] }
00:08:25.184 [Pipeline] // catchError
00:08:25.190 [Pipeline] }
00:08:25.203 [Pipeline] // wrap
00:08:25.208 [Pipeline] }
00:08:25.220 [Pipeline] // catchError
00:08:25.227 [Pipeline] stage
00:08:25.229 [Pipeline] { (Epilogue)
00:08:25.242 [Pipeline] catchError
00:08:25.244 [Pipeline] {
00:08:25.256 [Pipeline] echo
00:08:25.258 Cleanup processes
00:08:25.265 [Pipeline] sh
00:08:25.551 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:25.551 1067709 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache
00:08:25.551 1068132 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:25.568 [Pipeline] sh
00:08:25.850 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:08:25.850 ++ grep -v 'sudo pgrep'
00:08:25.850 ++ awk '{print $1}'
00:08:25.850 + sudo kill -9 1067709
00:08:25.862 [Pipeline] sh
00:08:26.150 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:08:26.150 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:08:26.150 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:08:27.530 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:08:37.524 [Pipeline] sh
00:08:37.906 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:08:37.906 Artifacts sizes are good
00:08:37.955 [Pipeline] archiveArtifacts
00:08:37.962 Archiving artifacts
00:08:38.116 [Pipeline] sh
00:08:38.410 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:08:38.425 [Pipeline] cleanWs
00:08:38.434 [WS-CLEANUP] Deleting project workspace...
00:08:38.434 [WS-CLEANUP] Deferred wipeout is used...
00:08:38.441 [WS-CLEANUP] done
00:08:38.443 [Pipeline] }
00:08:38.464 [Pipeline] // catchError
00:08:38.479 [Pipeline] sh
00:08:38.768 + logger -p user.info -t JENKINS-CI
00:08:38.778 [Pipeline] }
00:08:38.791 [Pipeline] // stage
00:08:38.796 [Pipeline] }
00:08:38.811 [Pipeline] // node
00:08:38.816 [Pipeline] End of Pipeline
00:08:38.862 Finished: SUCCESS
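The pm/common trace near the end of the log (signal_monitor_resources TERM) shows how the power/CPU collectors launched at the start of autopackage are torn down: each collector leaves a <name>.pid file under the power output directory, and shutdown sends TERM to whichever PID each existing file names (the BMC collector is signalled via sudo). A rough reconstruction of that pattern in shell, with a placeholder directory and not the actual pm/common code:

# Sketch of pidfile-based monitor shutdown; POWER_DIR is an assumed placeholder path.
POWER_DIR=/path/to/output/power
signal=TERM

for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    pidfile="$POWER_DIR/$monitor.pid"
    # Skip monitors that never wrote a PID file.
    [[ -e "$pidfile" ]] || continue
    pid=$(<"$pidfile")
    kill -"$signal" "$pid" 2>/dev/null || true   # the real run uses sudo -E for collect-bmc-pm
done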