00:00:00.000  Started by upstream project "autotest-per-patch" build number 132319
00:00:00.000  originally caused by:
00:00:00.001   Started by upstream project "jbp-per-patch" build number 25764
00:00:00.001   originally caused by:
00:00:00.001    Started by user sys_sgci
00:00:00.015  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vhost-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:01.611  The recommended git tool is: git
00:00:01.612  using credential 00000000-0000-0000-0000-000000000002
00:00:01.614   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vhost-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.625  Fetching changes from the remote Git repository
00:00:01.627   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.638  Using shallow fetch with depth 1
00:00:01.638  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.638   > git --version # timeout=10
00:00:01.648   > git --version # 'git version 2.39.2'
00:00:01.648  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.659  Setting http proxy: proxy-dmz.intel.com:911
00:00:01.659   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/84/24384/13 # timeout=5
00:00:05.920   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.933   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.946  Checking out Revision 6d4840695fb479ead742a39eb3a563a20cd15407 (FETCH_HEAD)
00:00:05.946   > git config core.sparsecheckout # timeout=10
00:00:05.958   > git read-tree -mu HEAD # timeout=10
00:00:05.974   > git checkout -f 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=5
00:00:05.997  Commit message: "jenkins/jjb-config: Commonize distro-based params"
00:00:05.997   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.088  [Pipeline] Start of Pipeline
00:00:06.103  [Pipeline] library
00:00:06.105  Loading library shm_lib@master
00:00:06.105  Library shm_lib@master is cached. Copying from home.
00:00:06.121  [Pipeline] node
00:00:06.137  Running on WFP29 in /var/jenkins/workspace/vhost-phy-autotest
00:00:06.139  [Pipeline] {
00:00:06.148  [Pipeline] catchError
00:00:06.150  [Pipeline] {
00:00:06.162  [Pipeline] wrap
00:00:06.168  [Pipeline] {
00:00:06.176  [Pipeline] stage
00:00:06.177  [Pipeline] { (Prologue)
00:00:06.390  [Pipeline] sh
00:00:06.703  + logger -p user.info -t JENKINS-CI
00:00:06.724  [Pipeline] echo
00:00:06.726  Node: WFP29
00:00:06.735  [Pipeline] sh
00:00:07.035  [Pipeline] setCustomBuildProperty
00:00:07.046  [Pipeline] echo
00:00:07.048  Cleanup processes
00:00:07.052  [Pipeline] sh
00:00:07.335  + sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:00:07.335  1758494 sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:00:07.348  [Pipeline] sh
00:00:07.632  ++ sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:00:07.632  ++ grep -v 'sudo pgrep'
00:00:07.632  ++ awk '{print $1}'
00:00:07.632  + sudo kill -9
00:00:07.632  + true
00:00:07.647  [Pipeline] cleanWs
00:00:07.656  [WS-CLEANUP] Deleting project workspace...
00:00:07.656  [WS-CLEANUP] Deferred wipeout is used...
00:00:07.662  [WS-CLEANUP] done
00:00:07.667  [Pipeline] setCustomBuildProperty
00:00:07.681  [Pipeline] sh
00:00:07.965  + sudo git config --global --replace-all safe.directory '*'
00:00:08.061  [Pipeline] httpRequest
00:00:08.366  [Pipeline] echo
00:00:08.368  Sorcerer 10.211.164.20 is alive
00:00:08.377  [Pipeline] retry
00:00:08.379  [Pipeline] {
00:00:08.390  [Pipeline] httpRequest
00:00:08.395  HttpMethod: GET
00:00:08.396  URL: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:08.396  Sending request to url: http://10.211.164.20/packages/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:08.401  Response Code: HTTP/1.1 200 OK
00:00:08.402  Success: Status code 200 is in the accepted range: 200,404
00:00:08.402  Saving response body to /var/jenkins/workspace/vhost-phy-autotest/jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:18.166  [Pipeline] }
00:00:18.183  [Pipeline] // retry
00:00:18.190  [Pipeline] sh
00:00:18.475  + tar --no-same-owner -xf jbp_6d4840695fb479ead742a39eb3a563a20cd15407.tar.gz
00:00:18.491  [Pipeline] httpRequest
00:00:19.133  [Pipeline] echo
00:00:19.135  Sorcerer 10.211.164.20 is alive
00:00:19.144  [Pipeline] retry
00:00:19.146  [Pipeline] {
00:00:19.159  [Pipeline] httpRequest
00:00:19.164  HttpMethod: GET
00:00:19.164  URL: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:19.164  Sending request to url: http://10.211.164.20/packages/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:00:19.169  Response Code: HTTP/1.1 200 OK
00:00:19.170  Success: Status code 200 is in the accepted range: 200,404
00:00:19.170  Saving response body to /var/jenkins/workspace/vhost-phy-autotest/spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:04:10.999  [Pipeline] }
00:04:11.018  [Pipeline] // retry
00:04:11.026  [Pipeline] sh
00:04:11.312  + tar --no-same-owner -xf spdk_a0c128549ce17427c3a035fd0ecce392e10dce99.tar.gz
00:04:13.862  [Pipeline] sh
00:04:14.145  + git -C spdk log --oneline -n5
00:04:14.145  a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:04:14.145  53ca6a885 bdev/nvme: Rearrange fields in spdk_bdev_nvme_opts to reduce holes.
00:04:14.145  03b7aa9c7 bdev/nvme: Move the spdk_bdev_nvme_opts and spdk_bdev_timeout_action struct to the public header.
00:04:14.145  d47eb51c9 bdev: fix a race between reset start and complete
00:04:14.145  83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:04:14.156  [Pipeline] }
00:04:14.173  [Pipeline] // stage
00:04:14.183  [Pipeline] stage
00:04:14.186  [Pipeline] { (Prepare)
00:04:14.205  [Pipeline] writeFile
00:04:14.224  [Pipeline] sh
00:04:14.506  + logger -p user.info -t JENKINS-CI
00:04:14.517  [Pipeline] sh
00:04:14.800  + logger -p user.info -t JENKINS-CI
00:04:14.811  [Pipeline] sh
00:04:15.093  + cat autorun-spdk.conf
00:04:15.093  SPDK_RUN_FUNCTIONAL_TEST=1
00:04:15.093  SPDK_TEST_VHOST=1
00:04:15.093  SPDK_RUN_ASAN=1
00:04:15.093  SPDK_RUN_UBSAN=1
00:04:15.101  RUN_NIGHTLY=0
00:04:15.105  [Pipeline] readFile
00:04:15.132  [Pipeline] copyArtifacts
00:04:15.156  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:04:15.160  [Pipeline] sh
00:04:15.445  + sudo mkdir -p /var/spdk/dependencies/vhost
00:04:15.456  [Pipeline] sh
00:04:15.736  + cd /var/spdk/dependencies/vhost
00:04:15.736  + md5sum --quiet -c /var/jenkins/workspace/vhost-phy-autotest/spdk_test_image.qcow2.gz.md5
00:04:19.040  [Pipeline] withEnv
00:04:19.042  [Pipeline] {
00:04:19.055  [Pipeline] sh
00:04:19.345  + set -ex
00:04:19.345  + [[ -f /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf ]]
00:04:19.345  + source /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf
00:04:19.345  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:19.345  ++ SPDK_TEST_VHOST=1
00:04:19.345  ++ SPDK_RUN_ASAN=1
00:04:19.345  ++ SPDK_RUN_UBSAN=1
00:04:19.345  ++ RUN_NIGHTLY=0
00:04:19.345  + case $SPDK_TEST_NVMF_NICS in
00:04:19.345  + DRIVERS=
00:04:19.345  + [[ -n '' ]]
00:04:19.345  + exit 0
00:04:19.355  [Pipeline] }
00:04:19.370  [Pipeline] // withEnv
00:04:19.376  [Pipeline] }
00:04:19.390  [Pipeline] // stage
00:04:19.401  [Pipeline] catchError
00:04:19.403  [Pipeline] {
00:04:19.418  [Pipeline] timeout
00:04:19.419  Timeout set to expire in 40 min
00:04:19.420  [Pipeline] {
00:04:19.434  [Pipeline] stage
00:04:19.435  [Pipeline] { (Tests)
00:04:19.451  [Pipeline] sh
00:04:19.771  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vhost-phy-autotest
00:04:19.771  ++ readlink -f /var/jenkins/workspace/vhost-phy-autotest
00:04:19.771  + DIR_ROOT=/var/jenkins/workspace/vhost-phy-autotest
00:04:19.771  + [[ -n /var/jenkins/workspace/vhost-phy-autotest ]]
00:04:19.771  + DIR_SPDK=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:04:19.771  + DIR_OUTPUT=/var/jenkins/workspace/vhost-phy-autotest/output
00:04:19.771  + [[ -d /var/jenkins/workspace/vhost-phy-autotest/spdk ]]
00:04:19.771  + [[ ! -d /var/jenkins/workspace/vhost-phy-autotest/output ]]
00:04:19.771  + mkdir -p /var/jenkins/workspace/vhost-phy-autotest/output
00:04:19.771  + [[ -d /var/jenkins/workspace/vhost-phy-autotest/output ]]
00:04:19.771  + [[ vhost-phy-autotest == pkgdep-* ]]
00:04:19.771  + cd /var/jenkins/workspace/vhost-phy-autotest
00:04:19.771  + source /etc/os-release
00:04:19.771  ++ NAME='Fedora Linux'
00:04:19.771  ++ VERSION='39 (Cloud Edition)'
00:04:19.771  ++ ID=fedora
00:04:19.771  ++ VERSION_ID=39
00:04:19.771  ++ VERSION_CODENAME=
00:04:19.771  ++ PLATFORM_ID=platform:f39
00:04:19.771  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:19.771  ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:19.771  ++ LOGO=fedora-logo-icon
00:04:19.771  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:19.771  ++ HOME_URL=https://fedoraproject.org/
00:04:19.771  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:19.771  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:19.771  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:19.771  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:19.771  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:19.771  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:19.771  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:19.771  ++ SUPPORT_END=2024-11-12
00:04:19.771  ++ VARIANT='Cloud Edition'
00:04:19.772  ++ VARIANT_ID=cloud
00:04:19.772  + uname -a
00:04:19.772  Linux spdk-wfp-29 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:19.772  + sudo /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh status
00:04:23.086  Hugepages
00:04:23.086  node     hugesize     free /  total
00:04:23.086  node0   1048576kB        0 /      0
00:04:23.086  node0      2048kB        0 /      0
00:04:23.086  node1   1048576kB        0 /      0
00:04:23.086  node1      2048kB        0 /      0
00:04:23.086  
00:04:23.086  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:23.086  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:04:23.086  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:04:23.086  NVMe                      0000:5e:00.0    144d   a80a   0       nvme             nvme0      nvme0n1
00:04:23.086  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:04:23.086  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:04:23.086  NVMe                      0000:af:00.0    8086   2701   1       nvme             nvme1      nvme1n1
00:04:23.086  NVMe                      0000:b0:00.0    8086   2701   1       nvme             nvme2      nvme2n1
00:04:23.086  + rm -f /tmp/spdk-ld-path
00:04:23.086  + source autorun-spdk.conf
00:04:23.086  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:23.086  ++ SPDK_TEST_VHOST=1
00:04:23.086  ++ SPDK_RUN_ASAN=1
00:04:23.086  ++ SPDK_RUN_UBSAN=1
00:04:23.086  ++ RUN_NIGHTLY=0
00:04:23.086  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:04:23.086  + [[ -n '' ]]
00:04:23.086  + sudo git config --global --add safe.directory /var/jenkins/workspace/vhost-phy-autotest/spdk
00:04:23.086  + for M in /var/spdk/build-*-manifest.txt
00:04:23.086  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:23.086  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vhost-phy-autotest/output/
00:04:23.086  + for M in /var/spdk/build-*-manifest.txt
00:04:23.086  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:23.086  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vhost-phy-autotest/output/
00:04:23.086  + for M in /var/spdk/build-*-manifest.txt
00:04:23.086  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:23.086  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vhost-phy-autotest/output/
00:04:23.086  ++ uname
00:04:23.086  + [[ Linux == \L\i\n\u\x ]]
00:04:23.086  + sudo dmesg -T
00:04:23.086  + sudo dmesg --clear
00:04:23.086  + dmesg_pid=1760046
00:04:23.086  + [[ Fedora Linux == FreeBSD ]]
00:04:23.086  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:23.086  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:23.086  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:23.086  + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:04:23.086  + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:04:23.086  + [[ -x /usr/src/fio-static/fio ]]
00:04:23.086  + export FIO_BIN=/usr/src/fio-static/fio
00:04:23.086  + sudo dmesg -Tw
00:04:23.086  + FIO_BIN=/usr/src/fio-static/fio
00:04:23.086  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\h\o\s\t\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:23.086  + [[ ! -v VFIO_QEMU_BIN ]]
00:04:23.086  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:23.086  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:23.086  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:23.086  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:23.086  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:23.086  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:23.086  + spdk/autorun.sh /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf
00:04:23.086    10:32:12  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:23.086   10:32:12  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf
00:04:23.086    10:32:12  -- vhost-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:23.086    10:32:12  -- vhost-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VHOST=1
00:04:23.086    10:32:12  -- vhost-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:04:23.086    10:32:12  -- vhost-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:04:23.086    10:32:12  -- vhost-phy-autotest/autorun-spdk.conf@5 -- $ RUN_NIGHTLY=0
00:04:23.086   10:32:12  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:23.086   10:32:12  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vhost-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf
00:04:23.086     10:32:12  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:23.086    10:32:12  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:04:23.087     10:32:12  -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:23.087     10:32:12  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:23.087     10:32:12  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:23.087     10:32:12  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:23.087      10:32:12  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:23.087      10:32:12  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:23.087      10:32:12  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:23.087      10:32:12  -- paths/export.sh@5 -- $ export PATH
00:04:23.087      10:32:12  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:23.087    10:32:12  -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/vhost-phy-autotest/spdk/../output
00:04:23.087      10:32:12  -- common/autobuild_common.sh@486 -- $ date +%s
00:04:23.087     10:32:12  -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732008732.XXXXXX
00:04:23.087    10:32:12  -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732008732.MyUbGv
00:04:23.087    10:32:12  -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:23.087    10:32:12  -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:04:23.087    10:32:12  -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/'
00:04:23.087    10:32:12  -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vhost-phy-autotest/spdk/xnvme --exclude /tmp'
00:04:23.087    10:32:12  -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vhost-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:04:23.087     10:32:12  -- common/autobuild_common.sh@502 -- $ get_config_params
00:04:23.087     10:32:12  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:23.087     10:32:12  -- common/autotest_common.sh@10 -- $ set +x
00:04:23.087    10:32:12  -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:04:23.087    10:32:12  -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:04:23.087    10:32:12  -- pm/common@17 -- $ local monitor
00:04:23.087    10:32:12  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:23.087    10:32:12  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:23.087     10:32:12  -- pm/common@21 -- $ date +%s
00:04:23.087    10:32:12  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:23.087    10:32:12  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:23.087     10:32:12  -- pm/common@21 -- $ date +%s
00:04:23.087    10:32:12  -- pm/common@21 -- $ /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008732
00:04:23.087    10:32:12  -- pm/common@25 -- $ sleep 1
00:04:23.087     10:32:12  -- pm/common@21 -- $ date +%s
00:04:23.087     10:32:12  -- pm/common@21 -- $ date +%s
00:04:23.087    10:32:12  -- pm/common@21 -- $ /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008732
00:04:23.087    10:32:12  -- pm/common@21 -- $ /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008732
00:04:23.087    10:32:12  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732008732
00:04:23.346  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008732_collect-cpu-temp.pm.log
00:04:23.346  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008732_collect-cpu-load.pm.log
00:04:23.346  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008732_collect-vmstat.pm.log
00:04:23.346  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732008732_collect-bmc-pm.bmc.pm.log
00:04:24.284    10:32:13  -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:04:24.284   10:32:13  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:24.284   10:32:13  -- spdk/autobuild.sh@12 -- $ umask 022
00:04:24.284   10:32:13  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vhost-phy-autotest/spdk
00:04:24.284   10:32:13  -- spdk/autobuild.sh@16 -- $ date -u
00:04:24.284  Tue Nov 19 09:32:13 AM UTC 2024
00:04:24.284   10:32:13  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:24.284  v25.01-pre-193-ga0c128549
00:04:24.284   10:32:13  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:24.284   10:32:13  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:24.284   10:32:13  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:24.284   10:32:13  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:24.284   10:32:13  -- common/autotest_common.sh@10 -- $ set +x
00:04:24.284  ************************************
00:04:24.284  START TEST asan
00:04:24.284  ************************************
00:04:24.284   10:32:13 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:24.284  using asan
00:04:24.284  
00:04:24.284  real	0m0.001s
00:04:24.284  user	0m0.000s
00:04:24.284  sys	0m0.000s
00:04:24.284   10:32:13 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:24.284   10:32:13 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:24.284  ************************************
00:04:24.284  END TEST asan
00:04:24.284  ************************************
00:04:24.284   10:32:13  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:24.284   10:32:13  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:24.284   10:32:13  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:24.284   10:32:13  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:24.284   10:32:13  -- common/autotest_common.sh@10 -- $ set +x
00:04:24.284  ************************************
00:04:24.284  START TEST ubsan
00:04:24.284  ************************************
00:04:24.284   10:32:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:24.284  using ubsan
00:04:24.284  
00:04:24.284  real	0m0.000s
00:04:24.284  user	0m0.000s
00:04:24.284  sys	0m0.000s
00:04:24.284   10:32:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:24.284   10:32:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:24.284  ************************************
00:04:24.284  END TEST ubsan
00:04:24.284  ************************************
00:04:24.284   10:32:14  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:24.284   10:32:14  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:24.284   10:32:14  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:24.284   10:32:14  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vhost-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:04:24.544  Using default SPDK env in /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk
00:04:24.544  Using default DPDK in /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build
00:04:24.803  Using 'verbs' RDMA provider
00:04:37.949  Configuring ISA-L (logfile: /var/jenkins/workspace/vhost-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:52.833  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vhost-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:52.833  Creating mk/config.mk...done.
00:04:52.833  Creating mk/cc.flags.mk...done.
00:04:52.833  Type 'make' to build.
00:04:52.833   10:32:41  -- spdk/autobuild.sh@70 -- $ run_test make make -j72
00:04:52.833   10:32:41  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:52.833   10:32:41  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:52.833   10:32:41  -- common/autotest_common.sh@10 -- $ set +x
00:04:52.833  ************************************
00:04:52.833  START TEST make
00:04:52.833  ************************************
00:04:52.833   10:32:41 make -- common/autotest_common.sh@1129 -- $ make -j72
00:04:52.833  make[1]: Nothing to be done for 'all'.
00:05:00.957  The Meson build system
00:05:00.957  Version: 1.5.0
00:05:00.957  Source dir: /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk
00:05:00.957  Build dir: /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build-tmp
00:05:00.957  Build type: native build
00:05:00.957  Program cat found: YES (/usr/bin/cat)
00:05:00.957  Project name: DPDK
00:05:00.957  Project version: 24.03.0
00:05:00.957  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:00.957  C linker for the host machine: cc ld.bfd 2.40-14
00:05:00.957  Host machine cpu family: x86_64
00:05:00.957  Host machine cpu: x86_64
00:05:00.957  Message: ## Building in Developer Mode ##
00:05:00.957  Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:00.957  Program check-symbols.sh found: YES (/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:00.957  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:00.957  Program python3 found: YES (/usr/bin/python3)
00:05:00.957  Program cat found: YES (/usr/bin/cat)
00:05:00.957  Compiler for C supports arguments -march=native: YES 
00:05:00.957  Checking for size of "void *" : 8 
00:05:00.957  Checking for size of "void *" : 8 (cached)
00:05:00.957  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:05:00.957  Library m found: YES
00:05:00.957  Library numa found: YES
00:05:00.957  Has header "numaif.h" : YES 
00:05:00.957  Library fdt found: NO
00:05:00.957  Library execinfo found: NO
00:05:00.957  Has header "execinfo.h" : YES 
00:05:00.957  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:00.957  Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:00.957  Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:00.957  Run-time dependency jansson found: NO (tried pkgconfig)
00:05:00.957  Run-time dependency openssl found: YES 3.1.1
00:05:00.957  Run-time dependency libpcap found: YES 1.10.4
00:05:00.957  Has header "pcap.h" with dependency libpcap: YES 
00:05:00.957  Compiler for C supports arguments -Wcast-qual: YES 
00:05:00.957  Compiler for C supports arguments -Wdeprecated: YES 
00:05:00.957  Compiler for C supports arguments -Wformat: YES 
00:05:00.957  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:05:00.957  Compiler for C supports arguments -Wformat-security: NO 
00:05:00.957  Compiler for C supports arguments -Wmissing-declarations: YES 
00:05:00.957  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:05:00.957  Compiler for C supports arguments -Wnested-externs: YES 
00:05:00.957  Compiler for C supports arguments -Wold-style-definition: YES 
00:05:00.957  Compiler for C supports arguments -Wpointer-arith: YES 
00:05:00.957  Compiler for C supports arguments -Wsign-compare: YES 
00:05:00.957  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:05:00.957  Compiler for C supports arguments -Wundef: YES 
00:05:00.957  Compiler for C supports arguments -Wwrite-strings: YES 
00:05:00.957  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:05:00.957  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:05:00.957  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:05:00.957  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:05:00.957  Program objdump found: YES (/usr/bin/objdump)
00:05:00.957  Compiler for C supports arguments -mavx512f: YES 
00:05:00.957  Checking if "AVX512 checking" compiles: YES 
00:05:00.957  Fetching value of define "__SSE4_2__" : 1 
00:05:00.957  Fetching value of define "__AES__" : 1 
00:05:00.957  Fetching value of define "__AVX__" : 1 
00:05:00.957  Fetching value of define "__AVX2__" : 1 
00:05:00.957  Fetching value of define "__AVX512BW__" : 1 
00:05:00.957  Fetching value of define "__AVX512CD__" : 1 
00:05:00.957  Fetching value of define "__AVX512DQ__" : 1 
00:05:00.957  Fetching value of define "__AVX512F__" : 1 
00:05:00.957  Fetching value of define "__AVX512VL__" : 1 
00:05:00.957  Fetching value of define "__PCLMUL__" : 1 
00:05:00.957  Fetching value of define "__RDRND__" : 1 
00:05:00.957  Fetching value of define "__RDSEED__" : 1 
00:05:00.957  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:05:00.957  Fetching value of define "__znver1__" : (undefined) 
00:05:00.957  Fetching value of define "__znver2__" : (undefined) 
00:05:00.957  Fetching value of define "__znver3__" : (undefined) 
00:05:00.957  Fetching value of define "__znver4__" : (undefined) 
00:05:00.957  Library asan found: YES
00:05:00.957  Compiler for C supports arguments -Wno-format-truncation: YES 
00:05:00.957  Message: lib/log: Defining dependency "log"
00:05:00.957  Message: lib/kvargs: Defining dependency "kvargs"
00:05:00.957  Message: lib/telemetry: Defining dependency "telemetry"
00:05:00.957  Library rt found: YES
00:05:00.957  Checking for function "getentropy" : NO 
00:05:00.957  Message: lib/eal: Defining dependency "eal"
00:05:00.957  Message: lib/ring: Defining dependency "ring"
00:05:00.957  Message: lib/rcu: Defining dependency "rcu"
00:05:00.957  Message: lib/mempool: Defining dependency "mempool"
00:05:00.957  Message: lib/mbuf: Defining dependency "mbuf"
00:05:00.957  Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:00.957  Fetching value of define "__AVX512F__" : 1 (cached)
00:05:00.957  Fetching value of define "__AVX512BW__" : 1 (cached)
00:05:00.957  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:05:00.957  Fetching value of define "__AVX512VL__" : 1 (cached)
00:05:00.957  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:05:00.957  Compiler for C supports arguments -mpclmul: YES 
00:05:00.957  Compiler for C supports arguments -maes: YES 
00:05:00.957  Compiler for C supports arguments -mavx512f: YES (cached)
00:05:00.957  Compiler for C supports arguments -mavx512bw: YES 
00:05:00.957  Compiler for C supports arguments -mavx512dq: YES 
00:05:00.957  Compiler for C supports arguments -mavx512vl: YES 
00:05:00.957  Compiler for C supports arguments -mvpclmulqdq: YES 
00:05:00.957  Compiler for C supports arguments -mavx2: YES 
00:05:00.957  Compiler for C supports arguments -mavx: YES 
00:05:00.957  Message: lib/net: Defining dependency "net"
00:05:00.957  Message: lib/meter: Defining dependency "meter"
00:05:00.957  Message: lib/ethdev: Defining dependency "ethdev"
00:05:00.957  Message: lib/pci: Defining dependency "pci"
00:05:00.957  Message: lib/cmdline: Defining dependency "cmdline"
00:05:00.958  Message: lib/hash: Defining dependency "hash"
00:05:00.958  Message: lib/timer: Defining dependency "timer"
00:05:00.958  Message: lib/compressdev: Defining dependency "compressdev"
00:05:00.958  Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:00.958  Message: lib/dmadev: Defining dependency "dmadev"
00:05:00.958  Compiler for C supports arguments -Wno-cast-qual: YES 
00:05:00.958  Message: lib/power: Defining dependency "power"
00:05:00.958  Message: lib/reorder: Defining dependency "reorder"
00:05:00.958  Message: lib/security: Defining dependency "security"
00:05:00.958  Has header "linux/userfaultfd.h" : YES 
00:05:00.958  Has header "linux/vduse.h" : YES 
00:05:00.958  Message: lib/vhost: Defining dependency "vhost"
00:05:00.958  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:00.958  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:00.958  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:00.958  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:00.958  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:00.958  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:00.958  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:00.958  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:00.958  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:00.958  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:00.958  Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:00.958  Configuring doxy-api-html.conf using configuration
00:05:00.958  Configuring doxy-api-man.conf using configuration
00:05:00.958  Program mandb found: YES (/usr/bin/mandb)
00:05:00.958  Program sphinx-build found: NO
00:05:00.958  Configuring rte_build_config.h using configuration
00:05:00.958  Message: 
00:05:00.958  =================
00:05:00.958  Applications Enabled
00:05:00.958  =================
00:05:00.958  
00:05:00.958  apps:
00:05:00.958  	
00:05:00.958  
00:05:00.958  Message: 
00:05:00.958  =================
00:05:00.958  Libraries Enabled
00:05:00.958  =================
00:05:00.958  
00:05:00.958  libs:
00:05:00.958  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:05:00.958  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:05:00.958  	cryptodev, dmadev, power, reorder, security, vhost, 
00:05:00.958  
00:05:00.958  Message: 
00:05:00.958  ===============
00:05:00.958  Drivers Enabled
00:05:00.958  ===============
00:05:00.958  
00:05:00.958  common:
00:05:00.958  	
00:05:00.958  bus:
00:05:00.958  	pci, vdev, 
00:05:00.958  mempool:
00:05:00.958  	ring, 
00:05:00.958  dma:
00:05:00.958  	
00:05:00.958  net:
00:05:00.958  	
00:05:00.958  crypto:
00:05:00.958  	
00:05:00.958  compress:
00:05:00.958  	
00:05:00.958  vdpa:
00:05:00.958  	
00:05:00.958  
00:05:00.958  Message: 
00:05:00.958  =================
00:05:00.958  Content Skipped
00:05:00.958  =================
00:05:00.958  
00:05:00.958  apps:
00:05:00.958  	dumpcap:	explicitly disabled via build config
00:05:00.958  	graph:	explicitly disabled via build config
00:05:00.958  	pdump:	explicitly disabled via build config
00:05:00.958  	proc-info:	explicitly disabled via build config
00:05:00.958  	test-acl:	explicitly disabled via build config
00:05:00.958  	test-bbdev:	explicitly disabled via build config
00:05:00.958  	test-cmdline:	explicitly disabled via build config
00:05:00.958  	test-compress-perf:	explicitly disabled via build config
00:05:00.958  	test-crypto-perf:	explicitly disabled via build config
00:05:00.958  	test-dma-perf:	explicitly disabled via build config
00:05:00.958  	test-eventdev:	explicitly disabled via build config
00:05:00.958  	test-fib:	explicitly disabled via build config
00:05:00.958  	test-flow-perf:	explicitly disabled via build config
00:05:00.958  	test-gpudev:	explicitly disabled via build config
00:05:00.958  	test-mldev:	explicitly disabled via build config
00:05:00.958  	test-pipeline:	explicitly disabled via build config
00:05:00.958  	test-pmd:	explicitly disabled via build config
00:05:00.958  	test-regex:	explicitly disabled via build config
00:05:00.958  	test-sad:	explicitly disabled via build config
00:05:00.958  	test-security-perf:	explicitly disabled via build config
00:05:00.958  	
00:05:00.958  libs:
00:05:00.958  	argparse:	explicitly disabled via build config
00:05:00.958  	metrics:	explicitly disabled via build config
00:05:00.958  	acl:	explicitly disabled via build config
00:05:00.958  	bbdev:	explicitly disabled via build config
00:05:00.958  	bitratestats:	explicitly disabled via build config
00:05:00.958  	bpf:	explicitly disabled via build config
00:05:00.958  	cfgfile:	explicitly disabled via build config
00:05:00.958  	distributor:	explicitly disabled via build config
00:05:00.958  	efd:	explicitly disabled via build config
00:05:00.958  	eventdev:	explicitly disabled via build config
00:05:00.958  	dispatcher:	explicitly disabled via build config
00:05:00.958  	gpudev:	explicitly disabled via build config
00:05:00.958  	gro:	explicitly disabled via build config
00:05:00.958  	gso:	explicitly disabled via build config
00:05:00.958  	ip_frag:	explicitly disabled via build config
00:05:00.958  	jobstats:	explicitly disabled via build config
00:05:00.958  	latencystats:	explicitly disabled via build config
00:05:00.958  	lpm:	explicitly disabled via build config
00:05:00.958  	member:	explicitly disabled via build config
00:05:00.958  	pcapng:	explicitly disabled via build config
00:05:00.958  	rawdev:	explicitly disabled via build config
00:05:00.958  	regexdev:	explicitly disabled via build config
00:05:00.958  	mldev:	explicitly disabled via build config
00:05:00.958  	rib:	explicitly disabled via build config
00:05:00.958  	sched:	explicitly disabled via build config
00:05:00.958  	stack:	explicitly disabled via build config
00:05:00.958  	ipsec:	explicitly disabled via build config
00:05:00.958  	pdcp:	explicitly disabled via build config
00:05:00.958  	fib:	explicitly disabled via build config
00:05:00.958  	port:	explicitly disabled via build config
00:05:00.958  	pdump:	explicitly disabled via build config
00:05:00.958  	table:	explicitly disabled via build config
00:05:00.958  	pipeline:	explicitly disabled via build config
00:05:00.958  	graph:	explicitly disabled via build config
00:05:00.958  	node:	explicitly disabled via build config
00:05:00.958  	
00:05:00.958  drivers:
00:05:00.958  	common/cpt:	not in enabled drivers build config
00:05:00.958  	common/dpaax:	not in enabled drivers build config
00:05:00.958  	common/iavf:	not in enabled drivers build config
00:05:00.958  	common/idpf:	not in enabled drivers build config
00:05:00.958  	common/ionic:	not in enabled drivers build config
00:05:00.958  	common/mvep:	not in enabled drivers build config
00:05:00.958  	common/octeontx:	not in enabled drivers build config
00:05:00.958  	bus/auxiliary:	not in enabled drivers build config
00:05:00.958  	bus/cdx:	not in enabled drivers build config
00:05:00.958  	bus/dpaa:	not in enabled drivers build config
00:05:00.958  	bus/fslmc:	not in enabled drivers build config
00:05:00.958  	bus/ifpga:	not in enabled drivers build config
00:05:00.958  	bus/platform:	not in enabled drivers build config
00:05:00.958  	bus/uacce:	not in enabled drivers build config
00:05:00.958  	bus/vmbus:	not in enabled drivers build config
00:05:00.958  	common/cnxk:	not in enabled drivers build config
00:05:00.958  	common/mlx5:	not in enabled drivers build config
00:05:00.958  	common/nfp:	not in enabled drivers build config
00:05:00.958  	common/nitrox:	not in enabled drivers build config
00:05:00.958  	common/qat:	not in enabled drivers build config
00:05:00.958  	common/sfc_efx:	not in enabled drivers build config
00:05:00.958  	mempool/bucket:	not in enabled drivers build config
00:05:00.958  	mempool/cnxk:	not in enabled drivers build config
00:05:00.958  	mempool/dpaa:	not in enabled drivers build config
00:05:00.958  	mempool/dpaa2:	not in enabled drivers build config
00:05:00.958  	mempool/octeontx:	not in enabled drivers build config
00:05:00.958  	mempool/stack:	not in enabled drivers build config
00:05:00.958  	dma/cnxk:	not in enabled drivers build config
00:05:00.958  	dma/dpaa:	not in enabled drivers build config
00:05:00.958  	dma/dpaa2:	not in enabled drivers build config
00:05:00.958  	dma/hisilicon:	not in enabled drivers build config
00:05:00.958  	dma/idxd:	not in enabled drivers build config
00:05:00.958  	dma/ioat:	not in enabled drivers build config
00:05:00.958  	dma/skeleton:	not in enabled drivers build config
00:05:00.958  	net/af_packet:	not in enabled drivers build config
00:05:00.958  	net/af_xdp:	not in enabled drivers build config
00:05:00.958  	net/ark:	not in enabled drivers build config
00:05:00.958  	net/atlantic:	not in enabled drivers build config
00:05:00.958  	net/avp:	not in enabled drivers build config
00:05:00.958  	net/axgbe:	not in enabled drivers build config
00:05:00.958  	net/bnx2x:	not in enabled drivers build config
00:05:00.958  	net/bnxt:	not in enabled drivers build config
00:05:00.958  	net/bonding:	not in enabled drivers build config
00:05:00.958  	net/cnxk:	not in enabled drivers build config
00:05:00.958  	net/cpfl:	not in enabled drivers build config
00:05:00.958  	net/cxgbe:	not in enabled drivers build config
00:05:00.958  	net/dpaa:	not in enabled drivers build config
00:05:00.958  	net/dpaa2:	not in enabled drivers build config
00:05:00.958  	net/e1000:	not in enabled drivers build config
00:05:00.958  	net/ena:	not in enabled drivers build config
00:05:00.958  	net/enetc:	not in enabled drivers build config
00:05:00.958  	net/enetfec:	not in enabled drivers build config
00:05:00.958  	net/enic:	not in enabled drivers build config
00:05:00.959  	net/failsafe:	not in enabled drivers build config
00:05:00.959  	net/fm10k:	not in enabled drivers build config
00:05:00.959  	net/gve:	not in enabled drivers build config
00:05:00.959  	net/hinic:	not in enabled drivers build config
00:05:00.959  	net/hns3:	not in enabled drivers build config
00:05:00.959  	net/i40e:	not in enabled drivers build config
00:05:00.959  	net/iavf:	not in enabled drivers build config
00:05:00.959  	net/ice:	not in enabled drivers build config
00:05:00.959  	net/idpf:	not in enabled drivers build config
00:05:00.959  	net/igc:	not in enabled drivers build config
00:05:00.959  	net/ionic:	not in enabled drivers build config
00:05:00.959  	net/ipn3ke:	not in enabled drivers build config
00:05:00.959  	net/ixgbe:	not in enabled drivers build config
00:05:00.959  	net/mana:	not in enabled drivers build config
00:05:00.959  	net/memif:	not in enabled drivers build config
00:05:00.959  	net/mlx4:	not in enabled drivers build config
00:05:00.959  	net/mlx5:	not in enabled drivers build config
00:05:00.959  	net/mvneta:	not in enabled drivers build config
00:05:00.959  	net/mvpp2:	not in enabled drivers build config
00:05:00.959  	net/netvsc:	not in enabled drivers build config
00:05:00.959  	net/nfb:	not in enabled drivers build config
00:05:00.959  	net/nfp:	not in enabled drivers build config
00:05:00.959  	net/ngbe:	not in enabled drivers build config
00:05:00.959  	net/null:	not in enabled drivers build config
00:05:00.959  	net/octeontx:	not in enabled drivers build config
00:05:00.959  	net/octeon_ep:	not in enabled drivers build config
00:05:00.959  	net/pcap:	not in enabled drivers build config
00:05:00.959  	net/pfe:	not in enabled drivers build config
00:05:00.959  	net/qede:	not in enabled drivers build config
00:05:00.959  	net/ring:	not in enabled drivers build config
00:05:00.959  	net/sfc:	not in enabled drivers build config
00:05:00.959  	net/softnic:	not in enabled drivers build config
00:05:00.959  	net/tap:	not in enabled drivers build config
00:05:00.959  	net/thunderx:	not in enabled drivers build config
00:05:00.959  	net/txgbe:	not in enabled drivers build config
00:05:00.959  	net/vdev_netvsc:	not in enabled drivers build config
00:05:00.959  	net/vhost:	not in enabled drivers build config
00:05:00.959  	net/virtio:	not in enabled drivers build config
00:05:00.959  	net/vmxnet3:	not in enabled drivers build config
00:05:00.959  	raw/*:	missing internal dependency, "rawdev"
00:05:00.959  	crypto/armv8:	not in enabled drivers build config
00:05:00.959  	crypto/bcmfs:	not in enabled drivers build config
00:05:00.959  	crypto/caam_jr:	not in enabled drivers build config
00:05:00.959  	crypto/ccp:	not in enabled drivers build config
00:05:00.959  	crypto/cnxk:	not in enabled drivers build config
00:05:00.959  	crypto/dpaa_sec:	not in enabled drivers build config
00:05:00.959  	crypto/dpaa2_sec:	not in enabled drivers build config
00:05:00.959  	crypto/ipsec_mb:	not in enabled drivers build config
00:05:00.959  	crypto/mlx5:	not in enabled drivers build config
00:05:00.959  	crypto/mvsam:	not in enabled drivers build config
00:05:00.959  	crypto/nitrox:	not in enabled drivers build config
00:05:00.959  	crypto/null:	not in enabled drivers build config
00:05:00.959  	crypto/octeontx:	not in enabled drivers build config
00:05:00.959  	crypto/openssl:	not in enabled drivers build config
00:05:00.959  	crypto/scheduler:	not in enabled drivers build config
00:05:00.959  	crypto/uadk:	not in enabled drivers build config
00:05:00.959  	crypto/virtio:	not in enabled drivers build config
00:05:00.959  	compress/isal:	not in enabled drivers build config
00:05:00.959  	compress/mlx5:	not in enabled drivers build config
00:05:00.959  	compress/nitrox:	not in enabled drivers build config
00:05:00.959  	compress/octeontx:	not in enabled drivers build config
00:05:00.959  	compress/zlib:	not in enabled drivers build config
00:05:00.959  	regex/*:	missing internal dependency, "regexdev"
00:05:00.959  	ml/*:	missing internal dependency, "mldev"
00:05:00.959  	vdpa/ifc:	not in enabled drivers build config
00:05:00.959  	vdpa/mlx5:	not in enabled drivers build config
00:05:00.959  	vdpa/nfp:	not in enabled drivers build config
00:05:00.959  	vdpa/sfc:	not in enabled drivers build config
00:05:00.959  	event/*:	missing internal dependency, "eventdev"
00:05:00.959  	baseband/*:	missing internal dependency, "bbdev"
00:05:00.959  	gpu/*:	missing internal dependency, "gpudev"
00:05:00.959  	
00:05:00.959  
00:05:01.218  Build targets in project: 85
00:05:01.218  
00:05:01.218  DPDK 24.03.0
00:05:01.218  
00:05:01.218    User defined options
00:05:01.218      buildtype          : debug
00:05:01.218      default_library    : shared
00:05:01.218      libdir             : lib
00:05:01.218      prefix             : /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build
00:05:01.218      b_sanitize         : address
00:05:01.218      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:05:01.218      c_link_args        : 
00:05:01.218      cpu_instruction_set: native
00:05:01.218      disable_apps       : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:05:01.218      disable_libs       : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:05:01.218      enable_docs        : false
00:05:01.218      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring
00:05:01.218      enable_kmods       : false
00:05:01.218      max_lcores         : 128
00:05:01.218      tests              : false
00:05:01.218  
00:05:01.218  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:01.832  ninja: Entering directory `/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build-tmp'
00:05:01.832  [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:01.832  [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:01.832  [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:01.832  [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:01.832  [5/268] Linking static target lib/librte_kvargs.a
00:05:01.832  [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:01.832  [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:01.832  [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:01.832  [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:01.832  [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:01.832  [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:01.832  [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:01.832  [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:01.832  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:01.832  [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:01.832  [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:01.832  [17/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:01.832  [18/268] Linking static target lib/librte_log.a
00:05:01.832  [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:02.094  [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:02.357  [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:02.357  [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:02.357  [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:02.357  [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:02.357  [25/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:02.357  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:02.357  [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:02.357  [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:02.357  [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:02.357  [30/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:02.357  [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:02.357  [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:02.357  [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:02.357  [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:02.357  [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:02.357  [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:02.357  [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:02.357  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:02.357  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:02.357  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:02.357  [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:02.357  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:02.357  [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:02.357  [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:02.357  [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:02.357  [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:02.357  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:02.357  [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:02.357  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:02.357  [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:02.357  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:02.357  [52/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:02.357  [53/268] Linking static target lib/librte_ring.a
00:05:02.357  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:02.357  [55/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:02.357  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:02.357  [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:02.357  [58/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:02.357  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:02.357  [60/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:02.357  [61/268] Linking static target lib/librte_telemetry.a
00:05:02.357  [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:02.357  [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:02.357  [64/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:02.357  [65/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:02.357  [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:02.357  [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:02.357  [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:02.357  [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:02.357  [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:02.357  [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:02.357  [72/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:02.357  [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:02.357  [74/268] Linking static target lib/librte_pci.a
00:05:02.357  [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:02.357  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:02.357  [77/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.357  [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:02.357  [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:02.620  [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:02.620  [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:02.620  [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:02.620  [83/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:02.620  [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:02.620  [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:02.620  [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:02.620  [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:02.620  [88/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:02.620  [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:02.620  [90/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:02.620  [91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:02.620  [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:02.620  [93/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:02.620  [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:02.620  [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:02.620  [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:02.620  [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:02.620  [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:02.620  [99/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:02.620  [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:02.620  [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:02.620  [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:02.620  [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:02.620  [104/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:02.620  [105/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:02.621  [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:02.879  [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:02.879  [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:02.879  [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:02.879  [110/268] Linking static target lib/librte_mempool.a
00:05:02.879  [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:02.879  [112/268] Linking static target lib/librte_meter.a
00:05:02.879  [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:02.879  [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:02.879  [115/268] Linking static target lib/librte_net.a
00:05:02.879  [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:02.879  [117/268] Linking static target lib/librte_eal.a
00:05:02.879  [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:02.879  [119/268] Linking static target lib/librte_rcu.a
00:05:02.879  [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:02.879  [121/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.879  [122/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.879  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:02.879  [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:02.879  [125/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.879  [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:02.879  [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:02.879  [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:02.879  [129/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:02.879  [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:02.879  [131/268] Linking target lib/librte_log.so.24.1
00:05:02.879  [132/268] Linking static target lib/librte_cmdline.a
00:05:02.879  [133/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:02.879  [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:02.879  [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:02.880  [136/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:02.880  [137/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:02.880  [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:03.138  [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:03.138  [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:03.138  [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:03.138  [142/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:03.138  [143/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:03.138  [144/268] Linking static target lib/librte_timer.a
00:05:03.138  [145/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:03.138  [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:03.138  [147/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.138  [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:03.138  [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:03.138  [150/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.138  [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:03.138  [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:03.138  [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:03.138  [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:03.138  [155/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:03.138  [156/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:03.138  [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.138  [158/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:03.138  [159/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:03.138  [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:03.138  [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:03.138  [162/268] Linking target lib/librte_telemetry.so.24.1
00:05:03.138  [163/268] Linking target lib/librte_kvargs.so.24.1
00:05:03.139  [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:03.139  [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:03.139  [166/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.139  [167/268] Linking static target lib/librte_power.a
00:05:03.139  [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:03.139  [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:03.139  [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:03.139  [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:03.139  [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:03.139  [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:03.139  [174/268] Linking static target lib/librte_compressdev.a
00:05:03.139  [175/268] Linking static target lib/librte_dmadev.a
00:05:03.139  [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:03.139  [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:03.139  [178/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:03.139  [179/268] Linking static target lib/librte_reorder.a
00:05:03.398  [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:03.398  [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:03.398  [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:03.398  [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:03.398  [184/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:03.398  [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:03.398  [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:03.398  [187/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:03.398  [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:03.398  [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:03.398  [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:03.398  [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:03.398  [192/268] Linking static target lib/librte_security.a
00:05:03.398  [193/268] Linking static target drivers/librte_bus_vdev.a
00:05:03.398  [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:03.398  [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:03.398  [196/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.657  [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.657  [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:03.657  [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:03.657  [200/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:03.657  [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:03.657  [202/268] Linking static target lib/librte_mbuf.a
00:05:03.657  [203/268] Linking static target drivers/librte_bus_pci.a
00:05:03.657  [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:03.657  [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:03.657  [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:03.657  [207/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:03.657  [208/268] Linking static target lib/librte_hash.a
00:05:03.657  [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:03.657  [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.657  [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:03.657  [212/268] Linking static target drivers/librte_mempool_ring.a
00:05:03.915  [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.915  [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.915  [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.915  [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:05:04.174  [217/268] Linking static target lib/librte_cryptodev.a
00:05:04.174  [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:04.174  [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.174  [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.174  [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.432  [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.432  [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.690  [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.691  [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:04.691  [226/268] Linking static target lib/librte_ethdev.a
00:05:05.626  [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:05:06.191  [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:09.475  [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:05:09.475  [230/268] Linking static target lib/librte_vhost.a
00:05:11.374  [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:05:13.906  [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:14.841  [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:05:15.100  [234/268] Linking target lib/librte_eal.so.24.1
00:05:15.100  [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:05:15.100  [236/268] Linking target lib/librte_pci.so.24.1
00:05:15.100  [237/268] Linking target lib/librte_timer.so.24.1
00:05:15.100  [238/268] Linking target lib/librte_meter.so.24.1
00:05:15.100  [239/268] Linking target lib/librte_ring.so.24.1
00:05:15.100  [240/268] Linking target drivers/librte_bus_vdev.so.24.1
00:05:15.100  [241/268] Linking target lib/librte_dmadev.so.24.1
00:05:15.359  [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:05:15.359  [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:05:15.359  [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:05:15.359  [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:05:15.359  [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:05:15.359  [247/268] Linking target lib/librte_mempool.so.24.1
00:05:15.359  [248/268] Linking target lib/librte_rcu.so.24.1
00:05:15.359  [249/268] Linking target drivers/librte_bus_pci.so.24.1
00:05:15.359  [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:05:15.620  [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:05:15.620  [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:05:15.620  [253/268] Linking target lib/librte_mbuf.so.24.1
00:05:15.620  [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:05:15.620  [255/268] Linking target lib/librte_compressdev.so.24.1
00:05:15.620  [256/268] Linking target lib/librte_net.so.24.1
00:05:15.620  [257/268] Linking target lib/librte_cryptodev.so.24.1
00:05:15.620  [258/268] Linking target lib/librte_reorder.so.24.1
00:05:15.917  [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:05:15.917  [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:05:15.917  [261/268] Linking target lib/librte_hash.so.24.1
00:05:15.917  [262/268] Linking target lib/librte_cmdline.so.24.1
00:05:15.917  [263/268] Linking target lib/librte_security.so.24.1
00:05:15.917  [264/268] Linking target lib/librte_ethdev.so.24.1
00:05:15.917  [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:05:16.203  [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:05:16.203  [267/268] Linking target lib/librte_power.so.24.1
00:05:16.203  [268/268] Linking target lib/librte_vhost.so.24.1
00:05:16.203  INFO: autodetecting backend as ninja
00:05:16.203  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build-tmp -j 72
00:05:26.177    CC lib/ut_mock/mock.o
00:05:26.177    CC lib/log/log.o
00:05:26.177    CC lib/log/log_flags.o
00:05:26.177    CC lib/log/log_deprecated.o
00:05:26.177    CC lib/ut/ut.o
00:05:26.177    LIB libspdk_ut_mock.a
00:05:26.177    SO libspdk_ut_mock.so.6.0
00:05:26.177    LIB libspdk_log.a
00:05:26.177    SO libspdk_log.so.7.1
00:05:26.177    SYMLINK libspdk_ut_mock.so
00:05:26.177    LIB libspdk_ut.a
00:05:26.177    SO libspdk_ut.so.2.0
00:05:26.177    SYMLINK libspdk_log.so
00:05:26.177    SYMLINK libspdk_ut.so
00:05:26.177    CXX lib/trace_parser/trace.o
00:05:26.177    CC lib/dma/dma.o
00:05:26.177    CC lib/util/bit_array.o
00:05:26.177    CC lib/util/base64.o
00:05:26.177    CC lib/util/crc16.o
00:05:26.177    CC lib/util/crc32.o
00:05:26.177    CC lib/util/cpuset.o
00:05:26.177    CC lib/util/crc32c.o
00:05:26.177    CC lib/util/crc32_ieee.o
00:05:26.177    CC lib/util/crc64.o
00:05:26.177    CC lib/util/dif.o
00:05:26.177    CC lib/util/fd.o
00:05:26.177    CC lib/util/fd_group.o
00:05:26.177    CC lib/util/iov.o
00:05:26.177    CC lib/util/file.o
00:05:26.177    CC lib/util/hexlify.o
00:05:26.177    CC lib/util/math.o
00:05:26.177    CC lib/util/net.o
00:05:26.177    CC lib/util/string.o
00:05:26.177    CC lib/util/strerror_tls.o
00:05:26.177    CC lib/util/pipe.o
00:05:26.177    CC lib/util/zipf.o
00:05:26.177    CC lib/ioat/ioat.o
00:05:26.177    CC lib/util/uuid.o
00:05:26.177    CC lib/util/xor.o
00:05:26.177    CC lib/util/md5.o
00:05:26.177    LIB libspdk_dma.a
00:05:26.177    CC lib/vfio_user/host/vfio_user.o
00:05:26.177    CC lib/vfio_user/host/vfio_user_pci.o
00:05:26.177    SO libspdk_dma.so.5.0
00:05:26.177    SYMLINK libspdk_dma.so
00:05:26.177    LIB libspdk_ioat.a
00:05:26.177    SO libspdk_ioat.so.7.0
00:05:26.177    SYMLINK libspdk_ioat.so
00:05:26.178    LIB libspdk_vfio_user.a
00:05:26.178    SO libspdk_vfio_user.so.5.0
00:05:26.178    SYMLINK libspdk_vfio_user.so
00:05:26.178    LIB libspdk_util.a
00:05:26.178    SO libspdk_util.so.10.1
00:05:26.178    LIB libspdk_trace_parser.a
00:05:26.178    SYMLINK libspdk_util.so
00:05:26.178    SO libspdk_trace_parser.so.6.0
00:05:26.178    SYMLINK libspdk_trace_parser.so
00:05:26.436    CC lib/env_dpdk/env.o
00:05:26.436    CC lib/env_dpdk/pci.o
00:05:26.436    CC lib/env_dpdk/pci_ioat.o
00:05:26.436    CC lib/env_dpdk/memory.o
00:05:26.436    CC lib/env_dpdk/init.o
00:05:26.436    CC lib/env_dpdk/pci_vmd.o
00:05:26.436    CC lib/env_dpdk/threads.o
00:05:26.436    CC lib/env_dpdk/pci_event.o
00:05:26.436    CC lib/env_dpdk/pci_virtio.o
00:05:26.436    CC lib/env_dpdk/pci_idxd.o
00:05:26.436    CC lib/env_dpdk/pci_dpdk.o
00:05:26.436    CC lib/env_dpdk/sigbus_handler.o
00:05:26.436    CC lib/env_dpdk/pci_dpdk_2207.o
00:05:26.436    CC lib/idxd/idxd.o
00:05:26.436    CC lib/idxd/idxd_user.o
00:05:26.436    CC lib/env_dpdk/pci_dpdk_2211.o
00:05:26.436    CC lib/idxd/idxd_kernel.o
00:05:26.436    CC lib/rdma_utils/rdma_utils.o
00:05:26.436    CC lib/json/json_write.o
00:05:26.436    CC lib/json/json_util.o
00:05:26.436    CC lib/json/json_parse.o
00:05:26.436    CC lib/conf/conf.o
00:05:26.436    CC lib/vmd/vmd.o
00:05:26.436    CC lib/vmd/led.o
00:05:26.694    LIB libspdk_conf.a
00:05:26.694    SO libspdk_conf.so.6.0
00:05:26.694    LIB libspdk_json.a
00:05:26.694    LIB libspdk_rdma_utils.a
00:05:26.694    SO libspdk_json.so.6.0
00:05:26.694    SO libspdk_rdma_utils.so.1.0
00:05:26.952    SYMLINK libspdk_conf.so
00:05:26.952    SYMLINK libspdk_json.so
00:05:26.952    SYMLINK libspdk_rdma_utils.so
00:05:27.211    LIB libspdk_idxd.a
00:05:27.211    LIB libspdk_vmd.a
00:05:27.211    SO libspdk_idxd.so.12.1
00:05:27.211    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:05:27.211    CC lib/jsonrpc/jsonrpc_server.o
00:05:27.211    CC lib/jsonrpc/jsonrpc_client.o
00:05:27.211    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:05:27.211    SO libspdk_vmd.so.6.0
00:05:27.211    CC lib/rdma_provider/common.o
00:05:27.211    CC lib/rdma_provider/rdma_provider_verbs.o
00:05:27.211    SYMLINK libspdk_vmd.so
00:05:27.211    SYMLINK libspdk_idxd.so
00:05:27.470    LIB libspdk_rdma_provider.a
00:05:27.470    LIB libspdk_jsonrpc.a
00:05:27.470    SO libspdk_rdma_provider.so.7.0
00:05:27.470    SO libspdk_jsonrpc.so.6.0
00:05:27.470    SYMLINK libspdk_rdma_provider.so
00:05:27.470    SYMLINK libspdk_jsonrpc.so
00:05:27.729    LIB libspdk_env_dpdk.a
00:05:27.729    CC lib/rpc/rpc.o
00:05:27.988    SO libspdk_env_dpdk.so.15.1
00:05:27.988    SYMLINK libspdk_env_dpdk.so
00:05:27.988    LIB libspdk_rpc.a
00:05:27.988    SO libspdk_rpc.so.6.0
00:05:28.247    SYMLINK libspdk_rpc.so
00:05:28.507    CC lib/keyring/keyring.o
00:05:28.507    CC lib/keyring/keyring_rpc.o
00:05:28.507    CC lib/notify/notify.o
00:05:28.507    CC lib/notify/notify_rpc.o
00:05:28.507    CC lib/trace/trace_flags.o
00:05:28.507    CC lib/trace/trace.o
00:05:28.507    CC lib/trace/trace_rpc.o
00:05:28.766    LIB libspdk_notify.a
00:05:28.766    LIB libspdk_keyring.a
00:05:28.766    SO libspdk_notify.so.6.0
00:05:28.766    SO libspdk_keyring.so.2.0
00:05:28.766    LIB libspdk_trace.a
00:05:28.766    SYMLINK libspdk_notify.so
00:05:28.766    SYMLINK libspdk_keyring.so
00:05:28.766    SO libspdk_trace.so.11.0
00:05:29.024    SYMLINK libspdk_trace.so
00:05:29.282    CC lib/sock/sock.o
00:05:29.282    CC lib/sock/sock_rpc.o
00:05:29.282    CC lib/thread/thread.o
00:05:29.282    CC lib/thread/iobuf.o
00:05:29.541    LIB libspdk_sock.a
00:05:29.541    SO libspdk_sock.so.10.0
00:05:29.801    SYMLINK libspdk_sock.so
00:05:30.059    CC lib/nvme/nvme_ctrlr.o
00:05:30.059    CC lib/nvme/nvme_ctrlr_cmd.o
00:05:30.059    CC lib/nvme/nvme_ns_cmd.o
00:05:30.059    CC lib/nvme/nvme_ns.o
00:05:30.059    CC lib/nvme/nvme_fabric.o
00:05:30.059    CC lib/nvme/nvme_pcie_common.o
00:05:30.059    CC lib/nvme/nvme_qpair.o
00:05:30.059    CC lib/nvme/nvme_pcie.o
00:05:30.059    CC lib/nvme/nvme_quirks.o
00:05:30.059    CC lib/nvme/nvme.o
00:05:30.059    CC lib/nvme/nvme_transport.o
00:05:30.059    CC lib/nvme/nvme_discovery.o
00:05:30.059    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:05:30.059    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:05:30.059    CC lib/nvme/nvme_tcp.o
00:05:30.059    CC lib/nvme/nvme_opal.o
00:05:30.059    CC lib/nvme/nvme_io_msg.o
00:05:30.059    CC lib/nvme/nvme_poll_group.o
00:05:30.059    CC lib/nvme/nvme_zns.o
00:05:30.059    CC lib/nvme/nvme_stubs.o
00:05:30.059    CC lib/nvme/nvme_auth.o
00:05:30.059    CC lib/nvme/nvme_cuse.o
00:05:30.059    CC lib/nvme/nvme_rdma.o
00:05:30.626    LIB libspdk_thread.a
00:05:30.885    SO libspdk_thread.so.11.0
00:05:30.885    SYMLINK libspdk_thread.so
00:05:31.144    CC lib/accel/accel.o
00:05:31.144    CC lib/accel/accel_rpc.o
00:05:31.144    CC lib/accel/accel_sw.o
00:05:31.144    CC lib/virtio/virtio.o
00:05:31.144    CC lib/virtio/virtio_pci.o
00:05:31.144    CC lib/virtio/virtio_vhost_user.o
00:05:31.144    CC lib/virtio/virtio_vfio_user.o
00:05:31.144    CC lib/fsdev/fsdev_io.o
00:05:31.144    CC lib/fsdev/fsdev.o
00:05:31.144    CC lib/fsdev/fsdev_rpc.o
00:05:31.144    CC lib/init/json_config.o
00:05:31.144    CC lib/init/subsystem.o
00:05:31.144    CC lib/init/subsystem_rpc.o
00:05:31.144    CC lib/init/rpc.o
00:05:31.144    CC lib/blob/blobstore.o
00:05:31.144    CC lib/blob/request.o
00:05:31.144    CC lib/blob/zeroes.o
00:05:31.144    CC lib/blob/blob_bs_dev.o
00:05:31.403    LIB libspdk_init.a
00:05:31.403    SO libspdk_init.so.6.0
00:05:31.662    LIB libspdk_virtio.a
00:05:31.662    SYMLINK libspdk_init.so
00:05:31.662    SO libspdk_virtio.so.7.0
00:05:31.662    SYMLINK libspdk_virtio.so
00:05:31.921    LIB libspdk_fsdev.a
00:05:31.921    SO libspdk_fsdev.so.2.0
00:05:31.921    CC lib/event/log_rpc.o
00:05:31.921    CC lib/event/app.o
00:05:31.921    CC lib/event/app_rpc.o
00:05:31.921    CC lib/event/reactor.o
00:05:31.921    CC lib/event/scheduler_static.o
00:05:31.921    SYMLINK libspdk_fsdev.so
00:05:32.179    LIB libspdk_accel.a
00:05:32.179    LIB libspdk_nvme.a
00:05:32.179    SO libspdk_accel.so.16.0
00:05:32.179    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:05:32.438    LIB libspdk_event.a
00:05:32.438    SYMLINK libspdk_accel.so
00:05:32.438    SO libspdk_event.so.14.0
00:05:32.438    SO libspdk_nvme.so.15.0
00:05:32.438    SYMLINK libspdk_event.so
00:05:32.697    SYMLINK libspdk_nvme.so
00:05:32.697    CC lib/bdev/bdev.o
00:05:32.697    CC lib/bdev/bdev_rpc.o
00:05:32.697    CC lib/bdev/bdev_zone.o
00:05:32.697    CC lib/bdev/part.o
00:05:32.697    CC lib/bdev/scsi_nvme.o
00:05:32.956    LIB libspdk_fuse_dispatcher.a
00:05:32.956    SO libspdk_fuse_dispatcher.so.1.0
00:05:32.956    SYMLINK libspdk_fuse_dispatcher.so
00:05:34.332    LIB libspdk_blob.a
00:05:34.332    SO libspdk_blob.so.11.0
00:05:34.591    SYMLINK libspdk_blob.so
00:05:34.849    CC lib/blobfs/blobfs.o
00:05:34.849    CC lib/blobfs/tree.o
00:05:34.849    CC lib/lvol/lvol.o
00:05:35.416    LIB libspdk_bdev.a
00:05:35.416    SO libspdk_bdev.so.17.0
00:05:35.416    SYMLINK libspdk_bdev.so
00:05:35.679    LIB libspdk_blobfs.a
00:05:35.679    SO libspdk_blobfs.so.10.0
00:05:35.679    CC lib/nbd/nbd.o
00:05:35.679    CC lib/nbd/nbd_rpc.o
00:05:35.679    CC lib/scsi/dev.o
00:05:35.679    CC lib/ftl/ftl_init.o
00:05:35.679    CC lib/ftl/ftl_core.o
00:05:35.679    CC lib/scsi/lun.o
00:05:35.679    CC lib/ftl/ftl_io.o
00:05:35.679    CC lib/scsi/port.o
00:05:35.679    CC lib/ftl/ftl_debug.o
00:05:35.679    LIB libspdk_lvol.a
00:05:35.679    CC lib/ftl/ftl_layout.o
00:05:35.679    CC lib/nvmf/ctrlr_bdev.o
00:05:35.679    CC lib/nvmf/ctrlr.o
00:05:35.679    CC lib/scsi/scsi.o
00:05:35.679    CC lib/nvmf/subsystem.o
00:05:35.679    CC lib/scsi/scsi_bdev.o
00:05:35.679    CC lib/nvmf/ctrlr_discovery.o
00:05:35.679    CC lib/scsi/scsi_pr.o
00:05:35.679    CC lib/ublk/ublk_rpc.o
00:05:35.679    CC lib/ftl/ftl_sb.o
00:05:35.679    CC lib/scsi/scsi_rpc.o
00:05:35.679    CC lib/ftl/ftl_l2p.o
00:05:35.679    CC lib/ublk/ublk.o
00:05:35.679    CC lib/scsi/task.o
00:05:35.679    CC lib/nvmf/nvmf.o
00:05:35.679    CC lib/ftl/ftl_l2p_flat.o
00:05:35.679    CC lib/nvmf/nvmf_rpc.o
00:05:35.679    CC lib/ftl/ftl_nv_cache.o
00:05:35.679    CC lib/nvmf/transport.o
00:05:35.679    CC lib/ftl/ftl_writer.o
00:05:35.679    CC lib/ftl/ftl_band.o
00:05:35.679    CC lib/ftl/ftl_rq.o
00:05:35.679    CC lib/ftl/ftl_band_ops.o
00:05:35.679    CC lib/nvmf/tcp.o
00:05:35.679    CC lib/ftl/ftl_reloc.o
00:05:35.679    CC lib/nvmf/rdma.o
00:05:35.679    CC lib/ftl/ftl_l2p_cache.o
00:05:35.679    CC lib/nvmf/mdns_server.o
00:05:35.679    CC lib/nvmf/stubs.o
00:05:35.679    CC lib/ftl/ftl_p2l.o
00:05:35.679    CC lib/nvmf/auth.o
00:05:35.679    CC lib/ftl/ftl_p2l_log.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_md.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_startup.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_misc.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:05:35.679    SYMLINK libspdk_blobfs.so
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_band.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:05:35.679    CC lib/ftl/utils/ftl_conf.o
00:05:35.679    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:05:35.679    CC lib/ftl/utils/ftl_mempool.o
00:05:35.679    CC lib/ftl/utils/ftl_md.o
00:05:35.679    CC lib/ftl/utils/ftl_bitmap.o
00:05:35.679    CC lib/ftl/utils/ftl_property.o
00:05:35.679    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:05:35.679    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:05:35.679    CC lib/ftl/upgrade/ftl_sb_v3.o
00:05:35.679    CC lib/ftl/upgrade/ftl_sb_v5.o
00:05:35.679    CC lib/ftl/nvc/ftl_nvc_dev.o
00:05:35.679    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:05:35.679    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:05:35.679    SO libspdk_lvol.so.10.0
00:05:35.938    SYMLINK libspdk_lvol.so
00:05:35.939    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:05:36.197    CC lib/ftl/base/ftl_base_dev.o
00:05:36.197    CC lib/ftl/base/ftl_base_bdev.o
00:05:36.197    CC lib/ftl/ftl_trace.o
00:05:36.456    LIB libspdk_nbd.a
00:05:36.456    SO libspdk_nbd.so.7.0
00:05:36.456    SYMLINK libspdk_nbd.so
00:05:36.456    LIB libspdk_scsi.a
00:05:36.715    SO libspdk_scsi.so.9.0
00:05:36.715    LIB libspdk_ublk.a
00:05:36.715    SYMLINK libspdk_scsi.so
00:05:36.715    SO libspdk_ublk.so.3.0
00:05:36.974    SYMLINK libspdk_ublk.so
00:05:36.974    LIB libspdk_ftl.a
00:05:36.974    CC lib/vhost/vhost.o
00:05:36.974    CC lib/vhost/vhost_rpc.o
00:05:36.974    CC lib/vhost/vhost_scsi.o
00:05:36.974    CC lib/vhost/vhost_blk.o
00:05:36.974    CC lib/vhost/rte_vhost_user.o
00:05:36.974    CC lib/iscsi/conn.o
00:05:36.974    CC lib/iscsi/iscsi.o
00:05:36.974    CC lib/iscsi/init_grp.o
00:05:36.974    CC lib/iscsi/param.o
00:05:36.974    CC lib/iscsi/iscsi_subsystem.o
00:05:36.974    CC lib/iscsi/portal_grp.o
00:05:36.974    CC lib/iscsi/tgt_node.o
00:05:36.974    CC lib/iscsi/task.o
00:05:36.974    CC lib/iscsi/iscsi_rpc.o
00:05:37.261    SO libspdk_ftl.so.9.0
00:05:37.521    SYMLINK libspdk_ftl.so
00:05:38.089    LIB libspdk_vhost.a
00:05:38.089    SO libspdk_vhost.so.8.0
00:05:38.089    SYMLINK libspdk_vhost.so
00:05:38.348    LIB libspdk_nvmf.a
00:05:38.348    SO libspdk_nvmf.so.20.0
00:05:38.348    LIB libspdk_iscsi.a
00:05:38.607    SO libspdk_iscsi.so.8.0
00:05:38.607    SYMLINK libspdk_nvmf.so
00:05:38.607    SYMLINK libspdk_iscsi.so
00:05:39.177    CC module/env_dpdk/env_dpdk_rpc.o
00:05:39.177    CC module/fsdev/aio/fsdev_aio.o
00:05:39.177    CC module/fsdev/aio/linux_aio_mgr.o
00:05:39.177    CC module/fsdev/aio/fsdev_aio_rpc.o
00:05:39.436    CC module/keyring/linux/keyring.o
00:05:39.436    CC module/keyring/linux/keyring_rpc.o
00:05:39.436    CC module/scheduler/dynamic/scheduler_dynamic.o
00:05:39.436    CC module/sock/posix/posix.o
00:05:39.436    CC module/blob/bdev/blob_bdev.o
00:05:39.436    LIB libspdk_env_dpdk_rpc.a
00:05:39.436    CC module/keyring/file/keyring.o
00:05:39.436    CC module/keyring/file/keyring_rpc.o
00:05:39.436    CC module/accel/dsa/accel_dsa.o
00:05:39.436    CC module/accel/dsa/accel_dsa_rpc.o
00:05:39.436    CC module/accel/error/accel_error_rpc.o
00:05:39.436    CC module/accel/iaa/accel_iaa.o
00:05:39.436    CC module/accel/error/accel_error.o
00:05:39.436    CC module/accel/iaa/accel_iaa_rpc.o
00:05:39.436    CC module/accel/ioat/accel_ioat_rpc.o
00:05:39.436    CC module/accel/ioat/accel_ioat.o
00:05:39.436    CC module/scheduler/gscheduler/gscheduler.o
00:05:39.436    SO libspdk_env_dpdk_rpc.so.6.0
00:05:39.436    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:05:39.436    SYMLINK libspdk_env_dpdk_rpc.so
00:05:39.436    LIB libspdk_keyring_linux.a
00:05:39.436    LIB libspdk_keyring_file.a
00:05:39.436    SO libspdk_keyring_linux.so.1.0
00:05:39.436    LIB libspdk_scheduler_gscheduler.a
00:05:39.436    SO libspdk_keyring_file.so.2.0
00:05:39.436    LIB libspdk_scheduler_dpdk_governor.a
00:05:39.436    LIB libspdk_scheduler_dynamic.a
00:05:39.436    SO libspdk_scheduler_gscheduler.so.4.0
00:05:39.436    LIB libspdk_accel_iaa.a
00:05:39.436    SO libspdk_scheduler_dpdk_governor.so.4.0
00:05:39.694    LIB libspdk_accel_ioat.a
00:05:39.694    LIB libspdk_accel_error.a
00:05:39.694    SO libspdk_scheduler_dynamic.so.4.0
00:05:39.694    SYMLINK libspdk_keyring_linux.so
00:05:39.694    SO libspdk_accel_iaa.so.3.0
00:05:39.694    SYMLINK libspdk_keyring_file.so
00:05:39.694    SYMLINK libspdk_scheduler_gscheduler.so
00:05:39.694    SO libspdk_accel_error.so.2.0
00:05:39.694    SO libspdk_accel_ioat.so.6.0
00:05:39.694    SYMLINK libspdk_scheduler_dpdk_governor.so
00:05:39.694    LIB libspdk_accel_dsa.a
00:05:39.694    LIB libspdk_blob_bdev.a
00:05:39.694    SYMLINK libspdk_scheduler_dynamic.so
00:05:39.694    SO libspdk_accel_dsa.so.5.0
00:05:39.694    SYMLINK libspdk_accel_iaa.so
00:05:39.694    SO libspdk_blob_bdev.so.11.0
00:05:39.694    SYMLINK libspdk_accel_error.so
00:05:39.694    SYMLINK libspdk_accel_ioat.so
00:05:39.694    SYMLINK libspdk_accel_dsa.so
00:05:39.694    SYMLINK libspdk_blob_bdev.so
00:05:39.952    LIB libspdk_fsdev_aio.a
00:05:39.952    SO libspdk_fsdev_aio.so.1.0
00:05:40.210    LIB libspdk_sock_posix.a
00:05:40.210    SO libspdk_sock_posix.so.6.0
00:05:40.210    SYMLINK libspdk_fsdev_aio.so
00:05:40.210    CC module/bdev/ftl/bdev_ftl.o
00:05:40.210    CC module/bdev/ftl/bdev_ftl_rpc.o
00:05:40.210    CC module/bdev/null/bdev_null.o
00:05:40.210    CC module/bdev/null/bdev_null_rpc.o
00:05:40.210    CC module/bdev/nvme/bdev_nvme.o
00:05:40.210    CC module/bdev/nvme/bdev_nvme_rpc.o
00:05:40.210    CC module/bdev/nvme/bdev_mdns_client.o
00:05:40.210    CC module/blobfs/bdev/blobfs_bdev.o
00:05:40.210    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:05:40.210    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:05:40.210    CC module/bdev/delay/vbdev_delay.o
00:05:40.210    CC module/bdev/nvme/nvme_rpc.o
00:05:40.210    CC module/bdev/nvme/vbdev_opal_rpc.o
00:05:40.210    CC module/bdev/delay/vbdev_delay_rpc.o
00:05:40.210    CC module/bdev/nvme/vbdev_opal.o
00:05:40.210    CC module/bdev/gpt/gpt.o
00:05:40.210    CC module/bdev/gpt/vbdev_gpt.o
00:05:40.210    CC module/bdev/zone_block/vbdev_zone_block.o
00:05:40.210    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:05:40.210    CC module/bdev/malloc/bdev_malloc.o
00:05:40.210    CC module/bdev/malloc/bdev_malloc_rpc.o
00:05:40.210    CC module/bdev/iscsi/bdev_iscsi.o
00:05:40.210    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:05:40.210    CC module/bdev/aio/bdev_aio.o
00:05:40.210    CC module/bdev/aio/bdev_aio_rpc.o
00:05:40.210    CC module/bdev/lvol/vbdev_lvol.o
00:05:40.210    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:05:40.210    CC module/bdev/split/vbdev_split.o
00:05:40.210    CC module/bdev/error/vbdev_error.o
00:05:40.210    CC module/bdev/split/vbdev_split_rpc.o
00:05:40.210    SYMLINK libspdk_sock_posix.so
00:05:40.210    CC module/bdev/error/vbdev_error_rpc.o
00:05:40.210    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:05:40.210    CC module/bdev/passthru/vbdev_passthru.o
00:05:40.210    CC module/bdev/raid/bdev_raid.o
00:05:40.210    CC module/bdev/raid/bdev_raid_rpc.o
00:05:40.210    CC module/bdev/raid/bdev_raid_sb.o
00:05:40.210    CC module/bdev/raid/raid0.o
00:05:40.210    CC module/bdev/raid/raid1.o
00:05:40.210    CC module/bdev/raid/concat.o
00:05:40.210    CC module/bdev/virtio/bdev_virtio_blk.o
00:05:40.210    CC module/bdev/virtio/bdev_virtio_scsi.o
00:05:40.210    CC module/bdev/virtio/bdev_virtio_rpc.o
00:05:40.469    LIB libspdk_blobfs_bdev.a
00:05:40.469    LIB libspdk_bdev_ftl.a
00:05:40.469    SO libspdk_blobfs_bdev.so.6.0
00:05:40.469    SO libspdk_bdev_ftl.so.6.0
00:05:40.469    LIB libspdk_bdev_gpt.a
00:05:40.469    LIB libspdk_bdev_error.a
00:05:40.469    LIB libspdk_bdev_split.a
00:05:40.469    SYMLINK libspdk_blobfs_bdev.so
00:05:40.469    SO libspdk_bdev_error.so.6.0
00:05:40.469    SO libspdk_bdev_gpt.so.6.0
00:05:40.469    SO libspdk_bdev_split.so.6.0
00:05:40.727    SYMLINK libspdk_bdev_ftl.so
00:05:40.727    LIB libspdk_bdev_passthru.a
00:05:40.727    LIB libspdk_bdev_null.a
00:05:40.727    LIB libspdk_bdev_aio.a
00:05:40.727    SYMLINK libspdk_bdev_gpt.so
00:05:40.727    LIB libspdk_bdev_malloc.a
00:05:40.727    SO libspdk_bdev_passthru.so.6.0
00:05:40.727    SYMLINK libspdk_bdev_error.so
00:05:40.727    SO libspdk_bdev_null.so.6.0
00:05:40.727    SYMLINK libspdk_bdev_split.so
00:05:40.727    LIB libspdk_bdev_iscsi.a
00:05:40.727    LIB libspdk_bdev_delay.a
00:05:40.727    SO libspdk_bdev_aio.so.6.0
00:05:40.727    SO libspdk_bdev_malloc.so.6.0
00:05:40.727    SO libspdk_bdev_iscsi.so.6.0
00:05:40.727    SO libspdk_bdev_delay.so.6.0
00:05:40.727    LIB libspdk_bdev_zone_block.a
00:05:40.727    SYMLINK libspdk_bdev_null.so
00:05:40.727    SYMLINK libspdk_bdev_passthru.so
00:05:40.728    SO libspdk_bdev_zone_block.so.6.0
00:05:40.728    SYMLINK libspdk_bdev_aio.so
00:05:40.728    SYMLINK libspdk_bdev_iscsi.so
00:05:40.728    SYMLINK libspdk_bdev_malloc.so
00:05:40.728    SYMLINK libspdk_bdev_delay.so
00:05:40.728    SYMLINK libspdk_bdev_zone_block.so
00:05:40.728    LIB libspdk_bdev_lvol.a
00:05:40.728    LIB libspdk_bdev_virtio.a
00:05:40.986    SO libspdk_bdev_lvol.so.6.0
00:05:40.986    SO libspdk_bdev_virtio.so.6.0
00:05:40.986    SYMLINK libspdk_bdev_lvol.so
00:05:40.986    SYMLINK libspdk_bdev_virtio.so
00:05:41.245    LIB libspdk_bdev_raid.a
00:05:41.504    SO libspdk_bdev_raid.so.6.0
00:05:41.504    SYMLINK libspdk_bdev_raid.so
00:05:42.882    LIB libspdk_bdev_nvme.a
00:05:42.882    SO libspdk_bdev_nvme.so.7.1
00:05:42.882    SYMLINK libspdk_bdev_nvme.so
00:05:43.449    CC module/event/subsystems/fsdev/fsdev.o
00:05:43.449    CC module/event/subsystems/sock/sock.o
00:05:43.449    CC module/event/subsystems/vmd/vmd.o
00:05:43.449    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:43.449    CC module/event/subsystems/scheduler/scheduler.o
00:05:43.449    CC module/event/subsystems/keyring/keyring.o
00:05:43.449    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:43.449    CC module/event/subsystems/iobuf/iobuf.o
00:05:43.449    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:43.708    LIB libspdk_event_sock.a
00:05:43.708    LIB libspdk_event_fsdev.a
00:05:43.708    LIB libspdk_event_keyring.a
00:05:43.708    LIB libspdk_event_scheduler.a
00:05:43.708    LIB libspdk_event_vhost_blk.a
00:05:43.708    LIB libspdk_event_vmd.a
00:05:43.708    SO libspdk_event_fsdev.so.1.0
00:05:43.708    SO libspdk_event_keyring.so.1.0
00:05:43.708    SO libspdk_event_sock.so.5.0
00:05:43.708    LIB libspdk_event_iobuf.a
00:05:43.708    SO libspdk_event_scheduler.so.4.0
00:05:43.708    SO libspdk_event_vmd.so.6.0
00:05:43.708    SO libspdk_event_vhost_blk.so.3.0
00:05:43.708    SO libspdk_event_iobuf.so.3.0
00:05:43.708    SYMLINK libspdk_event_fsdev.so
00:05:43.708    SYMLINK libspdk_event_keyring.so
00:05:43.708    SYMLINK libspdk_event_sock.so
00:05:43.708    SYMLINK libspdk_event_vhost_blk.so
00:05:43.708    SYMLINK libspdk_event_scheduler.so
00:05:43.708    SYMLINK libspdk_event_vmd.so
00:05:43.708    SYMLINK libspdk_event_iobuf.so
00:05:44.275    CC module/event/subsystems/accel/accel.o
00:05:44.275    LIB libspdk_event_accel.a
00:05:44.275    SO libspdk_event_accel.so.6.0
00:05:44.534    SYMLINK libspdk_event_accel.so
00:05:44.793    CC module/event/subsystems/bdev/bdev.o
00:05:44.793    LIB libspdk_event_bdev.a
00:05:45.052    SO libspdk_event_bdev.so.6.0
00:05:45.052    SYMLINK libspdk_event_bdev.so
00:05:45.311    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:45.311    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:45.311    CC module/event/subsystems/scsi/scsi.o
00:05:45.311    CC module/event/subsystems/ublk/ublk.o
00:05:45.311    CC module/event/subsystems/nbd/nbd.o
00:05:45.570    LIB libspdk_event_scsi.a
00:05:45.570    LIB libspdk_event_nbd.a
00:05:45.570    LIB libspdk_event_ublk.a
00:05:45.570    SO libspdk_event_scsi.so.6.0
00:05:45.570    SO libspdk_event_nbd.so.6.0
00:05:45.570    SO libspdk_event_ublk.so.3.0
00:05:45.570    LIB libspdk_event_nvmf.a
00:05:45.570    SYMLINK libspdk_event_scsi.so
00:05:45.570    SO libspdk_event_nvmf.so.6.0
00:05:45.570    SYMLINK libspdk_event_nbd.so
00:05:45.570    SYMLINK libspdk_event_ublk.so
00:05:45.828    SYMLINK libspdk_event_nvmf.so
00:05:45.828    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:46.141    CC module/event/subsystems/iscsi/iscsi.o
00:05:46.141    LIB libspdk_event_vhost_scsi.a
00:05:46.141    SO libspdk_event_vhost_scsi.so.3.0
00:05:46.141    LIB libspdk_event_iscsi.a
00:05:46.141    SYMLINK libspdk_event_vhost_scsi.so
00:05:46.141    SO libspdk_event_iscsi.so.6.0
00:05:46.141    SYMLINK libspdk_event_iscsi.so
00:05:46.423    SO libspdk.so.6.0
00:05:46.423    SYMLINK libspdk.so
00:05:46.682    CC app/spdk_top/spdk_top.o
00:05:46.682    CC app/spdk_nvme_identify/identify.o
00:05:46.682    CC app/spdk_lspci/spdk_lspci.o
00:05:46.682    CC app/trace_record/trace_record.o
00:05:46.682    CC app/spdk_nvme_perf/perf.o
00:05:46.682    CC test/rpc_client/rpc_client_test.o
00:05:46.682    CXX app/trace/trace.o
00:05:46.682    CC app/spdk_nvme_discover/discovery_aer.o
00:05:46.682    TEST_HEADER include/spdk/accel.h
00:05:46.682    TEST_HEADER include/spdk/accel_module.h
00:05:46.682    TEST_HEADER include/spdk/assert.h
00:05:46.682    TEST_HEADER include/spdk/barrier.h
00:05:46.682    TEST_HEADER include/spdk/base64.h
00:05:46.682    TEST_HEADER include/spdk/bdev.h
00:05:46.682    TEST_HEADER include/spdk/bdev_zone.h
00:05:46.682    TEST_HEADER include/spdk/bit_array.h
00:05:46.682    TEST_HEADER include/spdk/bdev_module.h
00:05:46.682    TEST_HEADER include/spdk/bit_pool.h
00:05:46.682    TEST_HEADER include/spdk/blob_bdev.h
00:05:46.682    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:46.682    TEST_HEADER include/spdk/blobfs.h
00:05:46.682    TEST_HEADER include/spdk/blob.h
00:05:46.682    TEST_HEADER include/spdk/config.h
00:05:46.682    TEST_HEADER include/spdk/crc16.h
00:05:46.682    TEST_HEADER include/spdk/cpuset.h
00:05:46.682    TEST_HEADER include/spdk/conf.h
00:05:46.682    TEST_HEADER include/spdk/crc64.h
00:05:46.682    CC app/spdk_dd/spdk_dd.o
00:05:46.682    TEST_HEADER include/spdk/crc32.h
00:05:46.682    TEST_HEADER include/spdk/dif.h
00:05:46.682    TEST_HEADER include/spdk/endian.h
00:05:46.682    TEST_HEADER include/spdk/dma.h
00:05:46.682    TEST_HEADER include/spdk/env_dpdk.h
00:05:46.682    TEST_HEADER include/spdk/env.h
00:05:46.682    TEST_HEADER include/spdk/event.h
00:05:46.682    TEST_HEADER include/spdk/fd_group.h
00:05:46.682    TEST_HEADER include/spdk/fd.h
00:05:46.682    TEST_HEADER include/spdk/file.h
00:05:46.682    CC app/iscsi_tgt/iscsi_tgt.o
00:05:46.682    TEST_HEADER include/spdk/fsdev.h
00:05:46.946    TEST_HEADER include/spdk/ftl.h
00:05:46.946    TEST_HEADER include/spdk/fsdev_module.h
00:05:46.946    TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:46.946    TEST_HEADER include/spdk/gpt_spec.h
00:05:46.946    TEST_HEADER include/spdk/hexlify.h
00:05:46.946    TEST_HEADER include/spdk/histogram_data.h
00:05:46.946    TEST_HEADER include/spdk/idxd_spec.h
00:05:46.946    TEST_HEADER include/spdk/idxd.h
00:05:46.946    CC app/nvmf_tgt/nvmf_main.o
00:05:46.946    TEST_HEADER include/spdk/init.h
00:05:46.946    TEST_HEADER include/spdk/ioat.h
00:05:46.946    TEST_HEADER include/spdk/ioat_spec.h
00:05:46.946    TEST_HEADER include/spdk/iscsi_spec.h
00:05:46.946    TEST_HEADER include/spdk/json.h
00:05:46.946    TEST_HEADER include/spdk/jsonrpc.h
00:05:46.946    TEST_HEADER include/spdk/keyring.h
00:05:46.946    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:46.946    TEST_HEADER include/spdk/keyring_module.h
00:05:46.946    TEST_HEADER include/spdk/likely.h
00:05:46.946    TEST_HEADER include/spdk/log.h
00:05:46.946    TEST_HEADER include/spdk/lvol.h
00:05:46.946    TEST_HEADER include/spdk/md5.h
00:05:46.946    TEST_HEADER include/spdk/mmio.h
00:05:46.946    TEST_HEADER include/spdk/memory.h
00:05:46.946    TEST_HEADER include/spdk/nbd.h
00:05:46.946    TEST_HEADER include/spdk/net.h
00:05:46.946    TEST_HEADER include/spdk/notify.h
00:05:46.946    TEST_HEADER include/spdk/nvme.h
00:05:46.946    TEST_HEADER include/spdk/nvme_intel.h
00:05:46.946    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:46.946    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:46.946    TEST_HEADER include/spdk/nvme_spec.h
00:05:46.946    TEST_HEADER include/spdk/nvme_zns.h
00:05:46.946    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:46.946    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:46.946    TEST_HEADER include/spdk/nvmf.h
00:05:46.946    TEST_HEADER include/spdk/nvmf_spec.h
00:05:46.946    TEST_HEADER include/spdk/nvmf_transport.h
00:05:46.946    TEST_HEADER include/spdk/opal.h
00:05:46.946    TEST_HEADER include/spdk/opal_spec.h
00:05:46.946    TEST_HEADER include/spdk/pci_ids.h
00:05:46.946    TEST_HEADER include/spdk/pipe.h
00:05:46.946    TEST_HEADER include/spdk/queue.h
00:05:46.946    TEST_HEADER include/spdk/reduce.h
00:05:46.946    TEST_HEADER include/spdk/rpc.h
00:05:46.946    TEST_HEADER include/spdk/scsi.h
00:05:46.946    TEST_HEADER include/spdk/scheduler.h
00:05:46.946    TEST_HEADER include/spdk/scsi_spec.h
00:05:46.946    TEST_HEADER include/spdk/sock.h
00:05:46.946    TEST_HEADER include/spdk/stdinc.h
00:05:46.946    TEST_HEADER include/spdk/string.h
00:05:46.946    TEST_HEADER include/spdk/thread.h
00:05:46.946    TEST_HEADER include/spdk/trace.h
00:05:46.946    TEST_HEADER include/spdk/trace_parser.h
00:05:46.946    TEST_HEADER include/spdk/tree.h
00:05:46.946    TEST_HEADER include/spdk/ublk.h
00:05:46.946    TEST_HEADER include/spdk/util.h
00:05:46.946    TEST_HEADER include/spdk/uuid.h
00:05:46.946    TEST_HEADER include/spdk/version.h
00:05:46.946    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:46.946    TEST_HEADER include/spdk/vhost.h
00:05:46.946    TEST_HEADER include/spdk/vmd.h
00:05:46.946    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:46.946    TEST_HEADER include/spdk/xor.h
00:05:46.946    TEST_HEADER include/spdk/zipf.h
00:05:46.946    CXX test/cpp_headers/accel.o
00:05:46.946    CXX test/cpp_headers/accel_module.o
00:05:46.946    CXX test/cpp_headers/assert.o
00:05:46.946    CXX test/cpp_headers/barrier.o
00:05:46.946    CXX test/cpp_headers/base64.o
00:05:46.946    CXX test/cpp_headers/bdev_module.o
00:05:46.946    CXX test/cpp_headers/bdev.o
00:05:46.946    CXX test/cpp_headers/bdev_zone.o
00:05:46.946    CXX test/cpp_headers/bit_array.o
00:05:46.946    CXX test/cpp_headers/blob_bdev.o
00:05:46.946    CXX test/cpp_headers/bit_pool.o
00:05:46.946    CXX test/cpp_headers/blobfs_bdev.o
00:05:46.946    CXX test/cpp_headers/blobfs.o
00:05:46.946    CXX test/cpp_headers/conf.o
00:05:46.946    CXX test/cpp_headers/config.o
00:05:46.946    CXX test/cpp_headers/blob.o
00:05:46.946    CXX test/cpp_headers/cpuset.o
00:05:46.946    CXX test/cpp_headers/crc16.o
00:05:46.946    CXX test/cpp_headers/crc32.o
00:05:46.946    CXX test/cpp_headers/crc64.o
00:05:46.946    CXX test/cpp_headers/dif.o
00:05:46.946    CXX test/cpp_headers/dma.o
00:05:46.946    CXX test/cpp_headers/endian.o
00:05:46.946    CXX test/cpp_headers/env_dpdk.o
00:05:46.946    CXX test/cpp_headers/env.o
00:05:46.946    CXX test/cpp_headers/event.o
00:05:46.946    CXX test/cpp_headers/fd_group.o
00:05:46.946    CXX test/cpp_headers/file.o
00:05:46.946    CXX test/cpp_headers/fd.o
00:05:46.946    CC examples/ioat/perf/perf.o
00:05:46.946    CXX test/cpp_headers/fsdev.o
00:05:46.946    CXX test/cpp_headers/ftl.o
00:05:46.946    CXX test/cpp_headers/fsdev_module.o
00:05:46.946    CXX test/cpp_headers/fuse_dispatcher.o
00:05:46.946    CXX test/cpp_headers/gpt_spec.o
00:05:46.946    CXX test/cpp_headers/hexlify.o
00:05:46.946    CXX test/cpp_headers/histogram_data.o
00:05:46.946    CXX test/cpp_headers/idxd.o
00:05:46.946    CXX test/cpp_headers/init.o
00:05:46.946    CXX test/cpp_headers/idxd_spec.o
00:05:46.946    CXX test/cpp_headers/ioat_spec.o
00:05:46.946    CXX test/cpp_headers/ioat.o
00:05:46.946    CXX test/cpp_headers/iscsi_spec.o
00:05:46.946    CC test/app/jsoncat/jsoncat.o
00:05:46.946    CC app/fio/nvme/fio_plugin.o
00:05:46.946    CC test/thread/poller_perf/poller_perf.o
00:05:46.946    CC examples/ioat/verify/verify.o
00:05:46.946    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:46.946    CC test/app/histogram_perf/histogram_perf.o
00:05:46.946    CC examples/util/zipf/zipf.o
00:05:46.946    CC test/env/memory/memory_ut.o
00:05:46.946    CC test/app/stub/stub.o
00:05:46.946    CC app/spdk_tgt/spdk_tgt.o
00:05:46.946    CC test/env/vtophys/vtophys.o
00:05:46.946    CC app/fio/bdev/fio_plugin.o
00:05:46.946    CC test/env/pci/pci_ut.o
00:05:46.946    CC test/app/bdev_svc/bdev_svc.o
00:05:46.946    CC test/dma/test_dma/test_dma.o
00:05:46.946    LINK spdk_lspci
00:05:47.210    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:47.210    CC test/env/mem_callbacks/mem_callbacks.o
00:05:47.210    LINK nvmf_tgt
00:05:47.210    LINK spdk_nvme_discover
00:05:47.210    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:47.210    LINK iscsi_tgt
00:05:47.210    LINK interrupt_tgt
00:05:47.210    LINK rpc_client_test
00:05:47.210    LINK poller_perf
00:05:47.210    LINK vtophys
00:05:47.210    LINK jsoncat
00:05:47.472    LINK histogram_perf
00:05:47.472    LINK stub
00:05:47.472    CXX test/cpp_headers/json.o
00:05:47.472    LINK spdk_trace_record
00:05:47.472    LINK bdev_svc
00:05:47.472    CXX test/cpp_headers/jsonrpc.o
00:05:47.472    CXX test/cpp_headers/keyring.o
00:05:47.472    CXX test/cpp_headers/keyring_module.o
00:05:47.472    CXX test/cpp_headers/likely.o
00:05:47.472    LINK env_dpdk_post_init
00:05:47.472    CXX test/cpp_headers/log.o
00:05:47.472    CXX test/cpp_headers/lvol.o
00:05:47.472    LINK zipf
00:05:47.472    CXX test/cpp_headers/md5.o
00:05:47.472    CXX test/cpp_headers/memory.o
00:05:47.472    CXX test/cpp_headers/mmio.o
00:05:47.472    CXX test/cpp_headers/nbd.o
00:05:47.472    CXX test/cpp_headers/net.o
00:05:47.472    CXX test/cpp_headers/notify.o
00:05:47.472    CXX test/cpp_headers/nvme.o
00:05:47.472    CXX test/cpp_headers/nvme_intel.o
00:05:47.472    CXX test/cpp_headers/nvme_ocssd.o
00:05:47.472    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:47.472    CXX test/cpp_headers/nvme_spec.o
00:05:47.472    CXX test/cpp_headers/nvme_zns.o
00:05:47.472    LINK verify
00:05:47.472    CXX test/cpp_headers/nvmf_cmd.o
00:05:47.472    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:47.472    CXX test/cpp_headers/nvmf.o
00:05:47.472    CXX test/cpp_headers/nvmf_spec.o
00:05:47.472    CXX test/cpp_headers/nvmf_transport.o
00:05:47.472    CXX test/cpp_headers/opal.o
00:05:47.472    CXX test/cpp_headers/opal_spec.o
00:05:47.472    CXX test/cpp_headers/pci_ids.o
00:05:47.472    CXX test/cpp_headers/pipe.o
00:05:47.472    CXX test/cpp_headers/queue.o
00:05:47.472    CXX test/cpp_headers/reduce.o
00:05:47.472    CXX test/cpp_headers/rpc.o
00:05:47.472    CXX test/cpp_headers/scsi.o
00:05:47.472    CXX test/cpp_headers/scheduler.o
00:05:47.472    CXX test/cpp_headers/scsi_spec.o
00:05:47.472    CXX test/cpp_headers/sock.o
00:05:47.472    CXX test/cpp_headers/stdinc.o
00:05:47.472    CXX test/cpp_headers/string.o
00:05:47.472    CXX test/cpp_headers/thread.o
00:05:47.472    CXX test/cpp_headers/trace.o
00:05:47.472    CXX test/cpp_headers/trace_parser.o
00:05:47.472    LINK ioat_perf
00:05:47.472    CXX test/cpp_headers/tree.o
00:05:47.472    LINK spdk_tgt
00:05:47.472    CXX test/cpp_headers/ublk.o
00:05:47.472    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:47.472    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:47.472    CXX test/cpp_headers/util.o
00:05:47.472    LINK spdk_dd
00:05:47.472    LINK spdk_trace
00:05:47.472    CXX test/cpp_headers/uuid.o
00:05:47.731    CXX test/cpp_headers/version.o
00:05:47.731    CXX test/cpp_headers/vfio_user_pci.o
00:05:47.731    CXX test/cpp_headers/vfio_user_spec.o
00:05:47.731    CXX test/cpp_headers/vhost.o
00:05:47.731    CXX test/cpp_headers/xor.o
00:05:47.731    CXX test/cpp_headers/vmd.o
00:05:47.731    CXX test/cpp_headers/zipf.o
00:05:47.990    LINK pci_ut
00:05:47.990    CC test/event/reactor/reactor.o
00:05:47.990    CC test/event/event_perf/event_perf.o
00:05:47.990    CC test/event/reactor_perf/reactor_perf.o
00:05:47.990    CC examples/sock/hello_world/hello_sock.o
00:05:47.990    LINK nvme_fuzz
00:05:47.990    CC examples/vmd/lsvmd/lsvmd.o
00:05:47.990    LINK spdk_bdev
00:05:47.990    CC examples/vmd/led/led.o
00:05:47.990    LINK test_dma
00:05:47.990    CC test/event/app_repeat/app_repeat.o
00:05:47.990    CC examples/idxd/perf/perf.o
00:05:47.990    CC test/event/scheduler/scheduler.o
00:05:47.990    LINK spdk_nvme
00:05:47.990    LINK mem_callbacks
00:05:47.990    CC examples/thread/thread/thread_ex.o
00:05:47.990    CC app/vhost/vhost.o
00:05:48.249    LINK event_perf
00:05:48.249    LINK lsvmd
00:05:48.249    LINK reactor
00:05:48.249    LINK reactor_perf
00:05:48.249    LINK led
00:05:48.249    LINK app_repeat
00:05:48.249    LINK vhost_fuzz
00:05:48.249    LINK vhost
00:05:48.249    LINK scheduler
00:05:48.249    LINK spdk_nvme_identify
00:05:48.249    LINK thread
00:05:48.249    LINK hello_sock
00:05:48.249    LINK spdk_top
00:05:48.249    LINK spdk_nvme_perf
00:05:48.507    LINK idxd_perf
00:05:48.507    CC test/nvme/reset/reset.o
00:05:48.507    CC test/nvme/err_injection/err_injection.o
00:05:48.507    CC test/nvme/sgl/sgl.o
00:05:48.507    CC test/nvme/aer/aer.o
00:05:48.507    CC test/nvme/boot_partition/boot_partition.o
00:05:48.507    CC test/nvme/overhead/overhead.o
00:05:48.507    CC test/nvme/compliance/nvme_compliance.o
00:05:48.507    CC test/nvme/fused_ordering/fused_ordering.o
00:05:48.507    CC test/nvme/startup/startup.o
00:05:48.508    CC test/nvme/fdp/fdp.o
00:05:48.508    CC test/nvme/simple_copy/simple_copy.o
00:05:48.508    CC test/nvme/reserve/reserve.o
00:05:48.508    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:48.508    CC test/nvme/cuse/cuse.o
00:05:48.508    CC test/nvme/connect_stress/connect_stress.o
00:05:48.508    CC test/nvme/e2edp/nvme_dp.o
00:05:48.508    CC test/blobfs/mkfs/mkfs.o
00:05:48.508    CC test/accel/dif/dif.o
00:05:48.508    CC test/lvol/esnap/esnap.o
00:05:48.765    LINK memory_ut
00:05:48.765    CC examples/nvme/hello_world/hello_world.o
00:05:48.765    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:48.765    CC examples/nvme/reconnect/reconnect.o
00:05:48.765    CC examples/accel/perf/accel_perf.o
00:05:48.765    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:48.765    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:48.765    CC examples/nvme/arbitration/arbitration.o
00:05:48.765    CC examples/nvme/hotplug/hotplug.o
00:05:48.765    LINK boot_partition
00:05:48.765    CC examples/nvme/abort/abort.o
00:05:48.765    LINK err_injection
00:05:48.765    LINK startup
00:05:48.765    CC examples/blob/hello_world/hello_blob.o
00:05:48.765    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:48.765    CC examples/blob/cli/blobcli.o
00:05:48.765    LINK connect_stress
00:05:48.765    LINK fused_ordering
00:05:48.765    LINK doorbell_aers
00:05:48.765    LINK mkfs
00:05:48.765    LINK reserve
00:05:48.765    LINK reset
00:05:48.765    LINK simple_copy
00:05:48.765    LINK aer
00:05:48.765    LINK nvme_dp
00:05:48.765    LINK overhead
00:05:49.023    LINK pmr_persistence
00:05:49.023    LINK sgl
00:05:49.023    LINK nvme_compliance
00:05:49.023    LINK cmb_copy
00:05:49.023    LINK fdp
00:05:49.023    LINK hello_world
00:05:49.023    LINK hotplug
00:05:49.023    LINK hello_blob
00:05:49.023    LINK hello_fsdev
00:05:49.023    LINK arbitration
00:05:49.023    LINK reconnect
00:05:49.023    LINK abort
00:05:49.281    LINK nvme_manage
00:05:49.281    LINK blobcli
00:05:49.281    LINK accel_perf
00:05:49.282    LINK dif
00:05:49.539    LINK iscsi_fuzz
00:05:49.798    LINK cuse
00:05:49.798    CC examples/bdev/hello_world/hello_bdev.o
00:05:49.798    CC examples/bdev/bdevperf/bdevperf.o
00:05:49.798    CC test/bdev/bdevio/bdevio.o
00:05:50.056    LINK hello_bdev
00:05:50.314    LINK bdevio
00:05:50.573    LINK bdevperf
00:05:51.139    CC examples/nvmf/nvmf/nvmf.o
00:05:51.398    LINK nvmf
00:05:53.931    LINK esnap
00:05:53.931  
00:05:53.931  real	1m2.005s
00:05:53.931  user	8m46.041s
00:05:53.931  sys	3m24.929s
00:05:53.931   10:33:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:53.931   10:33:43 make -- common/autotest_common.sh@10 -- $ set +x
00:05:53.931  ************************************
00:05:53.931  END TEST make
00:05:53.931  ************************************
00:05:53.931   10:33:43  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:53.931   10:33:43  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:53.931   10:33:43  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:53.931   10:33:43  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:53.931   10:33:43  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:05:53.931   10:33:43  -- pm/common@44 -- $ pid=1760088
00:05:53.931   10:33:43  -- pm/common@50 -- $ kill -TERM 1760088
00:05:53.931   10:33:43  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:53.931   10:33:43  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:05:53.931   10:33:43  -- pm/common@44 -- $ pid=1760090
00:05:53.931   10:33:43  -- pm/common@50 -- $ kill -TERM 1760090
00:05:53.931   10:33:43  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:53.931   10:33:43  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:05:53.931   10:33:43  -- pm/common@44 -- $ pid=1760092
00:05:53.931   10:33:43  -- pm/common@50 -- $ kill -TERM 1760092
00:05:53.931   10:33:43  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:53.932   10:33:43  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:05:53.932   10:33:43  -- pm/common@44 -- $ pid=1760119
00:05:53.932   10:33:43  -- pm/common@50 -- $ sudo -E kill -TERM 1760119
00:05:53.932   10:33:43  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:53.932   10:33:43  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vhost-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf
00:05:53.932    10:33:43  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:53.932     10:33:43  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:53.932     10:33:43  -- common/autotest_common.sh@1693 -- # lcov --version
00:05:54.190    10:33:43  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:54.190    10:33:43  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:54.190    10:33:43  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:54.190    10:33:43  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:54.190    10:33:43  -- scripts/common.sh@336 -- # IFS=.-:
00:05:54.190    10:33:43  -- scripts/common.sh@336 -- # read -ra ver1
00:05:54.190    10:33:43  -- scripts/common.sh@337 -- # IFS=.-:
00:05:54.190    10:33:43  -- scripts/common.sh@337 -- # read -ra ver2
00:05:54.190    10:33:43  -- scripts/common.sh@338 -- # local 'op=<'
00:05:54.190    10:33:43  -- scripts/common.sh@340 -- # ver1_l=2
00:05:54.190    10:33:43  -- scripts/common.sh@341 -- # ver2_l=1
00:05:54.190    10:33:43  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:54.190    10:33:43  -- scripts/common.sh@344 -- # case "$op" in
00:05:54.190    10:33:43  -- scripts/common.sh@345 -- # : 1
00:05:54.190    10:33:43  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:54.190    10:33:43  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:54.190     10:33:43  -- scripts/common.sh@365 -- # decimal 1
00:05:54.190     10:33:43  -- scripts/common.sh@353 -- # local d=1
00:05:54.190     10:33:43  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:54.190     10:33:43  -- scripts/common.sh@355 -- # echo 1
00:05:54.190    10:33:43  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:54.190     10:33:43  -- scripts/common.sh@366 -- # decimal 2
00:05:54.190     10:33:43  -- scripts/common.sh@353 -- # local d=2
00:05:54.191     10:33:43  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:54.191     10:33:43  -- scripts/common.sh@355 -- # echo 2
00:05:54.191    10:33:43  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:54.191    10:33:43  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:54.191    10:33:43  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:54.191    10:33:43  -- scripts/common.sh@368 -- # return 0
00:05:54.191    10:33:43  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:54.191    10:33:43  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:54.191  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.191  		--rc genhtml_branch_coverage=1
00:05:54.191  		--rc genhtml_function_coverage=1
00:05:54.191  		--rc genhtml_legend=1
00:05:54.191  		--rc geninfo_all_blocks=1
00:05:54.191  		--rc geninfo_unexecuted_blocks=1
00:05:54.191  		
00:05:54.191  		'
00:05:54.191    10:33:43  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:54.191  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.191  		--rc genhtml_branch_coverage=1
00:05:54.191  		--rc genhtml_function_coverage=1
00:05:54.191  		--rc genhtml_legend=1
00:05:54.191  		--rc geninfo_all_blocks=1
00:05:54.191  		--rc geninfo_unexecuted_blocks=1
00:05:54.191  		
00:05:54.191  		'
00:05:54.191    10:33:43  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:54.191  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.191  		--rc genhtml_branch_coverage=1
00:05:54.191  		--rc genhtml_function_coverage=1
00:05:54.191  		--rc genhtml_legend=1
00:05:54.191  		--rc geninfo_all_blocks=1
00:05:54.191  		--rc geninfo_unexecuted_blocks=1
00:05:54.191  		
00:05:54.191  		'
00:05:54.191    10:33:43  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:54.191  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:54.191  		--rc genhtml_branch_coverage=1
00:05:54.191  		--rc genhtml_function_coverage=1
00:05:54.191  		--rc genhtml_legend=1
00:05:54.191  		--rc geninfo_all_blocks=1
00:05:54.191  		--rc geninfo_unexecuted_blocks=1
00:05:54.191  		
00:05:54.191  		'
00:05:54.191   10:33:43  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh
00:05:54.191     10:33:43  -- nvmf/common.sh@7 -- # uname -s
00:05:54.191    10:33:43  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:54.191    10:33:43  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:54.191    10:33:43  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:54.191    10:33:43  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:54.191    10:33:43  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:54.191    10:33:43  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:54.191    10:33:43  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:54.191    10:33:43  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:54.191    10:33:43  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:54.191     10:33:43  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:54.191    10:33:43  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:05:54.191    10:33:43  -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:05:54.191    10:33:43  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:54.191    10:33:43  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:54.191    10:33:43  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:54.191    10:33:43  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:54.191    10:33:43  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:05:54.191     10:33:43  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:54.191     10:33:43  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:54.191     10:33:43  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:54.191     10:33:43  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:54.191      10:33:43  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:54.191      10:33:43  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:54.191      10:33:43  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:54.191      10:33:43  -- paths/export.sh@5 -- # export PATH
00:05:54.191      10:33:43  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:54.191    10:33:43  -- nvmf/common.sh@51 -- # : 0
00:05:54.191    10:33:43  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:54.191    10:33:43  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:54.191    10:33:43  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:54.191    10:33:43  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:54.191    10:33:43  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:54.191    10:33:43  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:54.191  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:54.191    10:33:43  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:54.191    10:33:43  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:54.191    10:33:43  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:54.191   10:33:43  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:54.191    10:33:43  -- spdk/autotest.sh@32 -- # uname -s
00:05:54.191   10:33:43  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:54.191   10:33:43  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:54.191   10:33:43  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/coredumps
00:05:54.191   10:33:43  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:05:54.191   10:33:43  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/coredumps
00:05:54.191   10:33:43  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:54.191    10:33:43  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:54.191   10:33:43  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:54.191   10:33:43  -- spdk/autotest.sh@48 -- # udevadm_pid=1821244
00:05:54.191   10:33:43  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:54.191   10:33:43  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:54.191   10:33:43  -- pm/common@17 -- # local monitor
00:05:54.191   10:33:43  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:54.191   10:33:43  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:54.191   10:33:43  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:54.191    10:33:43  -- pm/common@21 -- # date +%s
00:05:54.191   10:33:43  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:54.191    10:33:43  -- pm/common@21 -- # date +%s
00:05:54.191   10:33:43  -- pm/common@25 -- # sleep 1
00:05:54.191    10:33:43  -- pm/common@21 -- # date +%s
00:05:54.191   10:33:43  -- pm/common@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:05:54.191    10:33:43  -- pm/common@21 -- # date +%s
00:05:54.191   10:33:43  -- pm/common@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:05:54.191   10:33:43  -- pm/common@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:05:54.191   10:33:43  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:05:54.191  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008823_collect-cpu-load.pm.log
00:05:54.191  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008823_collect-cpu-temp.pm.log
00:05:54.191  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008823_collect-vmstat.pm.log
00:05:54.191  Redirecting to /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732008823_collect-bmc-pm.bmc.pm.log
00:05:55.127   10:33:44  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:55.127   10:33:44  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:55.127   10:33:44  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:55.127   10:33:44  -- common/autotest_common.sh@10 -- # set +x
00:05:55.127   10:33:44  -- spdk/autotest.sh@59 -- # create_test_list
00:05:55.127   10:33:44  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:55.127   10:33:44  -- common/autotest_common.sh@10 -- # set +x
00:05:55.127     10:33:44  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/autotest.sh
00:05:55.127    10:33:44  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk
00:05:55.127   10:33:44  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:05:55.127   10:33:44  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vhost-phy-autotest/spdk/../output
00:05:55.127   10:33:44  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vhost-phy-autotest/spdk
00:05:55.127   10:33:44  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:55.127    10:33:44  -- common/autotest_common.sh@1457 -- # uname
00:05:55.127   10:33:44  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:55.127   10:33:44  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:55.127    10:33:44  -- common/autotest_common.sh@1477 -- # uname
00:05:55.127   10:33:44  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:55.127   10:33:44  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:55.127   10:33:44  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:55.386  lcov: LCOV version 1.15
00:05:55.386   10:33:44  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vhost-phy-autotest/spdk -o /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/cov_base.info
00:06:17.320  /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:17.320  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:06:20.609   10:34:10  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:20.609   10:34:10  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:20.609   10:34:10  -- common/autotest_common.sh@10 -- # set +x
00:06:20.609   10:34:10  -- spdk/autotest.sh@78 -- # rm -f
00:06:20.609   10:34:10  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh reset
00:06:23.895  0000:5e:00.0 (144d a80a): Already using the nvme driver
00:06:23.895  0000:af:00.0 (8086 2701): Already using the nvme driver
00:06:23.895  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:b0:00.0 (8086 2701): Already using the nvme driver
00:06:23.895  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:06:23.895  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:06:24.154  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:06:24.154   10:34:13  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:24.154   10:34:13  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:24.154   10:34:13  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:24.154   10:34:13  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:06:24.154   10:34:13  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:24.154   10:34:13  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:24.154   10:34:13  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:24.154   10:34:13  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:24.154   10:34:13  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:06:24.154   10:34:13  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:24.154   10:34:13  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:24.154   10:34:13  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:06:24.154   10:34:13  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:06:24.154   10:34:13  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:24.154   10:34:13  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
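The get_zoned_devs pass traced above walks /sys/block/nvme* and reads each device's queue/zoned attribute, collecting anything that is not "none". A self-contained sketch of that check, run against a throwaway sysfs-style tree rather than the real /sys (paths and device names below are fabricated for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the zoned-device scan: build a fake sysfs tree, then apply
# the same "queue/zoned != none" test the trace shows above.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/nvme0n1/queue" "$sysroot/nvme1n1/queue"
echo none         > "$sysroot/nvme0n1/queue/zoned"   # conventional device
echo host-managed > "$sysroot/nvme1n1/queue/zoned"   # zoned device

declare -A zoned_devs
for nvme in "$sysroot"/nvme*; do
    dev=${nvme##*/}
    zoned=$(<"$nvme/queue/zoned")
    # Keep only devices whose zoned model is something other than "none".
    [[ $zoned == none ]] || zoned_devs[$dev]=$zoned
done
echo "zoned: ${!zoned_devs[*]} (${#zoned_devs[@]} device(s))"
# → zoned: nvme1n1 (1 device(s))
rm -rf "$sysroot"
```

In this run all three namespaces report "none", so the subsequent `(( 0 > 0 ))` check falls through and no device is excluded from the wipe loop.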
00:06:24.154   10:34:13  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:24.154   10:34:13  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:24.154   10:34:13  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:24.154   10:34:13  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:24.154   10:34:13  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:24.154  No valid GPT data, bailing
00:06:24.154    10:34:13  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:24.154   10:34:13  -- scripts/common.sh@394 -- # pt=
00:06:24.154   10:34:13  -- scripts/common.sh@395 -- # return 1
00:06:24.154   10:34:13  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:24.154  1+0 records in
00:06:24.154  1+0 records out
00:06:24.154  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495757 s, 212 MB/s
00:06:24.154   10:34:13  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:24.154   10:34:13  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:24.154   10:34:13  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:06:24.154   10:34:13  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:06:24.154   10:34:13  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:06:24.154  No valid GPT data, bailing
00:06:24.154    10:34:13  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:06:24.154   10:34:13  -- scripts/common.sh@394 -- # pt=
00:06:24.154   10:34:13  -- scripts/common.sh@395 -- # return 1
00:06:24.154   10:34:13  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:06:24.154  1+0 records in
00:06:24.154  1+0 records out
00:06:24.154  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00221885 s, 473 MB/s
00:06:24.154   10:34:13  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:24.154   10:34:13  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:24.154   10:34:13  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:06:24.154   10:34:13  -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:06:24.154   10:34:13  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:06:24.414  No valid GPT data, bailing
00:06:24.414    10:34:13  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:06:24.414   10:34:13  -- scripts/common.sh@394 -- # pt=
00:06:24.414   10:34:13  -- scripts/common.sh@395 -- # return 1
00:06:24.414   10:34:13  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:06:24.414  1+0 records in
00:06:24.414  1+0 records out
00:06:24.414  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422384 s, 248 MB/s
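The loop above repeats the same pattern per namespace: block_in_use probes for a partition table (spdk-gpt.py, then `blkid -s PTTYPE`), and when none is found the first MiB of the device is zeroed with dd. A sketch of that logic against a plain temp file standing in for /dev/nvmeXn1, so no real device is touched:

```shell
#!/usr/bin/env bash
# Sketch of the check-then-wipe step: if blkid reports no PTTYPE, zero
# the first 1 MiB (bs=1M count=1), exactly as the dd lines above show.
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=2 status=none   # fake device contents
# Random data carries no partition-table magic, so PTTYPE comes back empty.
pt=$(blkid -s PTTYPE -o value "$img" 2>/dev/null || true)
if [[ -z $pt ]]; then
    # No partition table detected: clobber the first MiB, leave the rest.
    dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc status=none
fi
if cmp -s <(head -c 1048576 "$img") <(head -c 1048576 /dev/zero); then
    wiped=yes
else
    wiped=no
fi
echo "first MiB wiped: $wiped"
rm -f "$img"
```

`conv=notrunc` matters when the target is a file rather than a block device: without it, dd would truncate the image to 1 MiB instead of overwriting in place.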
00:06:24.414   10:34:13  -- spdk/autotest.sh@105 -- # sync
00:06:24.414   10:34:13  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:24.414   10:34:13  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:24.414    10:34:13  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:29.714    10:34:18  -- spdk/autotest.sh@111 -- # uname -s
00:06:29.714   10:34:18  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:29.714   10:34:18  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:29.714   10:34:18  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh status
00:06:32.328  Hugepages
00:06:32.328  node     hugesize     free /  total
00:06:32.328  node0   1048576kB        0 /      0
00:06:32.328  node0      2048kB        0 /      0
00:06:32.328  node1   1048576kB        0 /      0
00:06:32.328  node1      2048kB        0 /      0
00:06:32.328  
00:06:32.328  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:32.328  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:06:32.588  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:06:32.588  NVMe                      0000:5e:00.0    144d   a80a   0       nvme             nvme0      nvme0n1
00:06:32.588  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:06:32.588  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:06:32.848  NVMe                      0000:af:00.0    8086   2701   1       nvme             nvme1      nvme1n1
00:06:32.848  NVMe                      0000:b0:00.0    8086   2701   1       nvme             nvme2      nvme2n1
00:06:32.848    10:34:22  -- spdk/autotest.sh@117 -- # uname -s
00:06:32.848   10:34:22  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:32.848   10:34:22  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:32.848   10:34:22  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh
00:06:36.138  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:36.138  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:06:36.397  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:06:36.397  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:36.397  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:38.384  0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:06:38.384   10:34:27  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:39.320   10:34:28  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:39.320   10:34:28  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:39.320   10:34:28  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:39.320    10:34:28  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:39.320    10:34:28  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:39.320    10:34:28  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:39.320    10:34:28  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:39.320     10:34:28  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:39.320     10:34:28  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:39.320    10:34:28  -- common/autotest_common.sh@1500 -- # (( 3 == 0 ))
00:06:39.320    10:34:28  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0
00:06:39.320   10:34:28  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh reset
00:06:42.608  Waiting for block devices as requested
00:06:42.608  0000:5e:00.0 (144d a80a): vfio-pci -> nvme
00:06:42.608  0000:af:00.0 (8086 2701): vfio-pci -> nvme
00:06:42.608  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:42.608  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:42.608  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:42.608  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:42.868  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:42.868  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:42.868  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:43.127  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:43.127  0000:b0:00.0 (8086 2701): vfio-pci -> nvme
00:06:43.127  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:06:43.387  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:06:43.387  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:06:43.387  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:06:43.645  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:06:43.645  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:06:43.645  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:06:43.905  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:06:43.905   10:34:33  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:43.905    10:34:33  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme
00:06:43.905    10:34:33  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:43.905    10:34:33  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:06:43.905     10:34:33  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:06:43.905    10:34:33  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:43.905   10:34:33  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:43.905   10:34:33  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:43.905   10:34:33  -- common/autotest_common.sh@1531 -- # oacs=' 0x5f'
00:06:43.905   10:34:33  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:43.905   10:34:33  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:43.905    10:34:33  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:43.905    10:34:33  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:43.905    10:34:33  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:43.905   10:34:33  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:43.905   10:34:33  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:43.905   10:34:33  -- common/autotest_common.sh@1543 -- # continue
00:06:43.905   10:34:33  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:43.905    10:34:33  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # grep 0000:af:00.0/nvme/nvme
00:06:43.905    10:34:33  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1
00:06:43.905    10:34:33  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]]
00:06:43.905     10:34:33  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1
00:06:43.905    10:34:33  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:06:43.905   10:34:33  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:06:43.905   10:34:33  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:43.905    10:34:33  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:43.905   10:34:33  -- common/autotest_common.sh@1531 -- # oacs=' 0x7'
00:06:43.905   10:34:33  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0
00:06:43.905   10:34:33  -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]]
00:06:43.905   10:34:33  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:43.905    10:34:33  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2
00:06:43.905     10:34:33  -- common/autotest_common.sh@1487 -- # grep 0000:b0:00.0/nvme/nvme
00:06:44.165    10:34:33  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2
00:06:44.165    10:34:33  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]]
00:06:44.165     10:34:33  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2
00:06:44.165    10:34:33  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2
00:06:44.165   10:34:33  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2
00:06:44.165   10:34:33  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]]
00:06:44.165    10:34:33  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2
00:06:44.165    10:34:33  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:44.165    10:34:33  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:44.165   10:34:33  -- common/autotest_common.sh@1531 -- # oacs=' 0x7'
00:06:44.165   10:34:33  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0
00:06:44.165   10:34:33  -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]]
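The per-controller checks above parse the OACS (Optional Admin Command Support) field out of `nvme id-ctrl` and test bit 3, the Namespace Management capability: the Samsung controller reports oacs=0x5f (bit 3 set, so the unvmcap check runs next), while both 0x2701 controllers report 0x7 and are skipped. The bit test, using the same two values seen in this run:

```shell
#!/usr/bin/env bash
# Sketch of the OACS bit test: mask 0x8 selects bit 3 of the field,
# which advertises Namespace Management / Attachment support.
for oacs in 0x5f 0x7; do
    ns_manage=$(( oacs & 0x8 ))
    if (( ns_manage != 0 )); then
        echo "oacs=$oacs: namespace management supported"
    else
        echo "oacs=$oacs: namespace management not supported"
    fi
done
```

This matches the traced values: `oacs_ns_manage=8` for 0x5f and `oacs_ns_manage=0` for 0x7.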
00:06:44.165   10:34:33  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:44.165   10:34:33  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:44.165   10:34:33  -- common/autotest_common.sh@10 -- # set +x
00:06:44.165   10:34:33  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:44.165   10:34:33  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:44.165   10:34:33  -- common/autotest_common.sh@10 -- # set +x
00:06:44.165   10:34:33  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh
00:06:48.356  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:06:48.356  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:06:48.356  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:06:48.356  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:06:48.356  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
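Each "ioatdma -> vfio-pci" line above is setup.sh rebinding a PCI function; the currently bound driver is simply the basename of the device's sysfs `driver` symlink. A small sketch of that lookup (the BDF is taken from this run and will not exist on other machines, hence the existence guard):

```shell
#!/usr/bin/env bash
# Sketch: resolve which kernel driver a PCI function is bound to by
# following /sys/bus/pci/devices/<BDF>/driver.
bdf=0000:00:04.0
link=/sys/bus/pci/devices/$bdf/driver
if [[ -e $link ]]; then
    echo "$bdf bound to $(basename "$(readlink -f "$link")")"
else
    echo "$bdf not present or unbound"
fi
```

After this setup.sh pass, that lookup would report vfio-pci for every I/OAT and NVMe function listed above, which is what SPDK's userspace drivers need before the tests start.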
00:06:48.356   10:34:37  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:48.356   10:34:37  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:48.356   10:34:37  -- common/autotest_common.sh@10 -- # set +x
00:06:48.356   10:34:37  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:48.356   10:34:37  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:48.356    10:34:37  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:48.356    10:34:37  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:48.356    10:34:37  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:48.356    10:34:37  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:48.356    10:34:37  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:48.356     10:34:37  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:48.356     10:34:37  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:48.356     10:34:37  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:48.356     10:34:37  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:48.356      10:34:37  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:48.356      10:34:37  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:48.356     10:34:37  -- common/autotest_common.sh@1500 -- # (( 3 == 0 ))
00:06:48.356     10:34:37  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0
00:06:48.356    10:34:37  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:48.356     10:34:37  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:06:48.356    10:34:37  -- common/autotest_common.sh@1566 -- # device=0xa80a
00:06:48.356    10:34:37  -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:06:48.356    10:34:37  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:48.357     10:34:37  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:af:00.0/device
00:06:48.357    10:34:37  -- common/autotest_common.sh@1566 -- # device=0x2701
00:06:48.357    10:34:37  -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]]
00:06:48.357    10:34:37  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:48.357     10:34:37  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device
00:06:48.357    10:34:37  -- common/autotest_common.sh@1566 -- # device=0x2701
00:06:48.357    10:34:37  -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]]
00:06:48.357    10:34:37  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:48.357    10:34:37  -- common/autotest_common.sh@1572 -- # return 0
00:06:48.357   10:34:37  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:48.357   10:34:37  -- common/autotest_common.sh@1580 -- # return 0
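The opal_revert_cleanup scan above reads each controller's PCI device ID from /sys/bus/pci/devices/&lt;BDF&gt;/device and keeps only those matching 0x0a54; none of the three controllers here match, so the resulting list is empty and the function returns early. A sketch of that filter with the IDs from this run hard-coded in place of the sysfs reads:

```shell
#!/usr/bin/env bash
# Sketch of the 0x0a54 filter: string-compare each device ID against the
# target, as the [[ 0xa80a == \0\x\0\a\5\4 ]] lines above do.
target=0x0a54
declare -A device_id=(
    [0000:5e:00.0]=0xa80a
    [0000:af:00.0]=0x2701
    [0000:b0:00.0]=0x2701
)
bdfs=()
for bdf in "${!device_id[@]}"; do
    [[ ${device_id[$bdf]} == "$target" ]] && bdfs+=("$bdf")
done
echo "controllers matching $target: ${#bdfs[@]}"
# → controllers matching 0x0a54: 0
```

With zero matches, `(( 0 > 0 ))` is false and the Opal revert path is skipped entirely, as the trace shows.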
00:06:48.357   10:34:37  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:48.357   10:34:37  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:48.357   10:34:37  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:48.357   10:34:37  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:48.357   10:34:37  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:48.357   10:34:37  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:48.357   10:34:37  -- common/autotest_common.sh@10 -- # set +x
00:06:48.357   10:34:37  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:48.357   10:34:37  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/env.sh
00:06:48.357   10:34:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:48.357   10:34:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.357   10:34:37  -- common/autotest_common.sh@10 -- # set +x
00:06:48.357  ************************************
00:06:48.357  START TEST env
00:06:48.357  ************************************
00:06:48.357   10:34:37 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/env.sh
00:06:48.357  * Looking for test storage...
00:06:48.357  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:48.357     10:34:38 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:48.357     10:34:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:48.357    10:34:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:48.357    10:34:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:48.357    10:34:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:48.357    10:34:38 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:48.357    10:34:38 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:48.357    10:34:38 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:48.357    10:34:38 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:48.357    10:34:38 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:48.357    10:34:38 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:48.357    10:34:38 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:48.357    10:34:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:48.357    10:34:38 env -- scripts/common.sh@344 -- # case "$op" in
00:06:48.357    10:34:38 env -- scripts/common.sh@345 -- # : 1
00:06:48.357    10:34:38 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:48.357    10:34:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:48.357     10:34:38 env -- scripts/common.sh@365 -- # decimal 1
00:06:48.357     10:34:38 env -- scripts/common.sh@353 -- # local d=1
00:06:48.357     10:34:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:48.357     10:34:38 env -- scripts/common.sh@355 -- # echo 1
00:06:48.357    10:34:38 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:48.357     10:34:38 env -- scripts/common.sh@366 -- # decimal 2
00:06:48.357     10:34:38 env -- scripts/common.sh@353 -- # local d=2
00:06:48.357     10:34:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:48.357     10:34:38 env -- scripts/common.sh@355 -- # echo 2
00:06:48.357    10:34:38 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:48.357    10:34:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:48.357    10:34:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:48.357    10:34:38 env -- scripts/common.sh@368 -- # return 0
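The trace above is scripts/common.sh comparing `lcov --version` (1.15) against 2: each version is split on ".", "-", and ":" and the fields are compared numerically, left to right. A self-contained sketch of that comparison (the function name `ver_lt` is mine; the script calls it `lt`/`cmp_versions`):

```shell
#!/usr/bin/env bash
# Sketch of the field-wise numeric version comparison traced above.
# Returns 0 (true) when $1 < $2, 1 otherwise; missing fields count as 0.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
ver_lt 1.15 2   && echo "1.15 < 2"
ver_lt 1.15 1.2 || echo "1.15 >= 1.2"
```

Note the comparison is numeric per field, not lexicographic: 1.15 is greater than 1.2 here (15 > 2), which is the behavior the trace's `(( ver1[v] > ver2[v] ))` steps implement. Since 1 < 2 at the first field, lcov 1.15 is classified as the pre-2.0 toolchain and the legacy `--rc` option set exported just below is used.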
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:48.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.357  		--rc genhtml_branch_coverage=1
00:06:48.357  		--rc genhtml_function_coverage=1
00:06:48.357  		--rc genhtml_legend=1
00:06:48.357  		--rc geninfo_all_blocks=1
00:06:48.357  		--rc geninfo_unexecuted_blocks=1
00:06:48.357  		
00:06:48.357  		'
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:48.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.357  		--rc genhtml_branch_coverage=1
00:06:48.357  		--rc genhtml_function_coverage=1
00:06:48.357  		--rc genhtml_legend=1
00:06:48.357  		--rc geninfo_all_blocks=1
00:06:48.357  		--rc geninfo_unexecuted_blocks=1
00:06:48.357  		
00:06:48.357  		'
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:48.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.357  		--rc genhtml_branch_coverage=1
00:06:48.357  		--rc genhtml_function_coverage=1
00:06:48.357  		--rc genhtml_legend=1
00:06:48.357  		--rc geninfo_all_blocks=1
00:06:48.357  		--rc geninfo_unexecuted_blocks=1
00:06:48.357  		
00:06:48.357  		'
00:06:48.357    10:34:38 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:48.357  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:48.357  		--rc genhtml_branch_coverage=1
00:06:48.357  		--rc genhtml_function_coverage=1
00:06:48.357  		--rc genhtml_legend=1
00:06:48.357  		--rc geninfo_all_blocks=1
00:06:48.357  		--rc geninfo_unexecuted_blocks=1
00:06:48.357  		
00:06:48.357  		'
00:06:48.357   10:34:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/memory/memory_ut
00:06:48.357   10:34:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:48.357   10:34:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.357   10:34:38 env -- common/autotest_common.sh@10 -- # set +x
00:06:48.357  ************************************
00:06:48.357  START TEST env_memory
00:06:48.357  ************************************
00:06:48.357   10:34:38 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/memory/memory_ut
00:06:48.616  
00:06:48.616  
00:06:48.616       CUnit - A unit testing framework for C - Version 2.1-3
00:06:48.616       http://cunit.sourceforge.net/
00:06:48.616  
00:06:48.616  
00:06:48.616  Suite: memory
00:06:48.616    Test: alloc and free memory map ...[2024-11-19 10:34:38.199432] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:48.616  passed
00:06:48.616    Test: mem map translation ...[2024-11-19 10:34:38.235012] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:48.616  [2024-11-19 10:34:38.235042] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:48.616  [2024-11-19 10:34:38.235097] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:48.616  [2024-11-19 10:34:38.235116] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:48.616  passed
00:06:48.616    Test: mem map registration ...[2024-11-19 10:34:38.290712] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:48.616  [2024-11-19 10:34:38.290740] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:48.616  passed
00:06:48.617    Test: mem map adjacent registrations ...passed
00:06:48.617  
00:06:48.617  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:48.617                suites      1      1    n/a      0        0
00:06:48.617                 tests      4      4      4      0        0
00:06:48.617               asserts    152    152    152      0      n/a
00:06:48.617  
00:06:48.617  Elapsed time =    0.203 seconds
00:06:48.617  
00:06:48.617  real	0m0.244s
00:06:48.617  user	0m0.216s
00:06:48.617  sys	0m0.027s
00:06:48.617   10:34:38 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.617   10:34:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:48.617  ************************************
00:06:48.617  END TEST env_memory
00:06:48.617  ************************************
00:06:48.876   10:34:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:48.876   10:34:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:48.876   10:34:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.876   10:34:38 env -- common/autotest_common.sh@10 -- # set +x
00:06:48.876  ************************************
00:06:48.876  START TEST env_vtophys
00:06:48.876  ************************************
00:06:48.876   10:34:38 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:48.876  EAL: lib.eal log level changed from notice to debug
00:06:48.876  EAL: Detected lcore 0 as core 0 on socket 0
00:06:48.876  EAL: Detected lcore 1 as core 1 on socket 0
00:06:48.876  EAL: Detected lcore 2 as core 2 on socket 0
00:06:48.876  EAL: Detected lcore 3 as core 3 on socket 0
00:06:48.876  EAL: Detected lcore 4 as core 4 on socket 0
00:06:48.876  EAL: Detected lcore 5 as core 8 on socket 0
00:06:48.876  EAL: Detected lcore 6 as core 9 on socket 0
00:06:48.876  EAL: Detected lcore 7 as core 10 on socket 0
00:06:48.876  EAL: Detected lcore 8 as core 11 on socket 0
00:06:48.876  EAL: Detected lcore 9 as core 16 on socket 0
00:06:48.876  EAL: Detected lcore 10 as core 17 on socket 0
00:06:48.876  EAL: Detected lcore 11 as core 18 on socket 0
00:06:48.876  EAL: Detected lcore 12 as core 19 on socket 0
00:06:48.876  EAL: Detected lcore 13 as core 20 on socket 0
00:06:48.876  EAL: Detected lcore 14 as core 24 on socket 0
00:06:48.876  EAL: Detected lcore 15 as core 25 on socket 0
00:06:48.876  EAL: Detected lcore 16 as core 26 on socket 0
00:06:48.876  EAL: Detected lcore 17 as core 27 on socket 0
00:06:48.876  EAL: Detected lcore 18 as core 0 on socket 1
00:06:48.876  EAL: Detected lcore 19 as core 1 on socket 1
00:06:48.876  EAL: Detected lcore 20 as core 2 on socket 1
00:06:48.876  EAL: Detected lcore 21 as core 3 on socket 1
00:06:48.876  EAL: Detected lcore 22 as core 4 on socket 1
00:06:48.876  EAL: Detected lcore 23 as core 8 on socket 1
00:06:48.876  EAL: Detected lcore 24 as core 9 on socket 1
00:06:48.876  EAL: Detected lcore 25 as core 10 on socket 1
00:06:48.876  EAL: Detected lcore 26 as core 11 on socket 1
00:06:48.876  EAL: Detected lcore 27 as core 16 on socket 1
00:06:48.876  EAL: Detected lcore 28 as core 17 on socket 1
00:06:48.876  EAL: Detected lcore 29 as core 18 on socket 1
00:06:48.876  EAL: Detected lcore 30 as core 19 on socket 1
00:06:48.876  EAL: Detected lcore 31 as core 20 on socket 1
00:06:48.876  EAL: Detected lcore 32 as core 24 on socket 1
00:06:48.876  EAL: Detected lcore 33 as core 25 on socket 1
00:06:48.876  EAL: Detected lcore 34 as core 26 on socket 1
00:06:48.876  EAL: Detected lcore 35 as core 27 on socket 1
00:06:48.876  EAL: Detected lcore 36 as core 0 on socket 0
00:06:48.876  EAL: Detected lcore 37 as core 1 on socket 0
00:06:48.876  EAL: Detected lcore 38 as core 2 on socket 0
00:06:48.876  EAL: Detected lcore 39 as core 3 on socket 0
00:06:48.876  EAL: Detected lcore 40 as core 4 on socket 0
00:06:48.876  EAL: Detected lcore 41 as core 8 on socket 0
00:06:48.876  EAL: Detected lcore 42 as core 9 on socket 0
00:06:48.876  EAL: Detected lcore 43 as core 10 on socket 0
00:06:48.876  EAL: Detected lcore 44 as core 11 on socket 0
00:06:48.876  EAL: Detected lcore 45 as core 16 on socket 0
00:06:48.876  EAL: Detected lcore 46 as core 17 on socket 0
00:06:48.876  EAL: Detected lcore 47 as core 18 on socket 0
00:06:48.876  EAL: Detected lcore 48 as core 19 on socket 0
00:06:48.876  EAL: Detected lcore 49 as core 20 on socket 0
00:06:48.876  EAL: Detected lcore 50 as core 24 on socket 0
00:06:48.876  EAL: Detected lcore 51 as core 25 on socket 0
00:06:48.876  EAL: Detected lcore 52 as core 26 on socket 0
00:06:48.876  EAL: Detected lcore 53 as core 27 on socket 0
00:06:48.876  EAL: Detected lcore 54 as core 0 on socket 1
00:06:48.876  EAL: Detected lcore 55 as core 1 on socket 1
00:06:48.876  EAL: Detected lcore 56 as core 2 on socket 1
00:06:48.876  EAL: Detected lcore 57 as core 3 on socket 1
00:06:48.876  EAL: Detected lcore 58 as core 4 on socket 1
00:06:48.876  EAL: Detected lcore 59 as core 8 on socket 1
00:06:48.876  EAL: Detected lcore 60 as core 9 on socket 1
00:06:48.876  EAL: Detected lcore 61 as core 10 on socket 1
00:06:48.876  EAL: Detected lcore 62 as core 11 on socket 1
00:06:48.876  EAL: Detected lcore 63 as core 16 on socket 1
00:06:48.876  EAL: Detected lcore 64 as core 17 on socket 1
00:06:48.876  EAL: Detected lcore 65 as core 18 on socket 1
00:06:48.876  EAL: Detected lcore 66 as core 19 on socket 1
00:06:48.876  EAL: Detected lcore 67 as core 20 on socket 1
00:06:48.876  EAL: Detected lcore 68 as core 24 on socket 1
00:06:48.876  EAL: Detected lcore 69 as core 25 on socket 1
00:06:48.876  EAL: Detected lcore 70 as core 26 on socket 1
00:06:48.876  EAL: Detected lcore 71 as core 27 on socket 1
00:06:48.876  EAL: Maximum logical cores by configuration: 128
00:06:48.876  EAL: Detected CPU lcores: 72
00:06:48.876  EAL: Detected NUMA nodes: 2
00:06:48.876  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:48.876  EAL: Detected shared linkage of DPDK
00:06:48.876  EAL: No shared files mode enabled, IPC will be disabled
00:06:48.876  EAL: Bus pci wants IOVA as 'DC'
00:06:48.876  EAL: Buses did not request a specific IOVA mode.
00:06:48.876  EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:48.876  EAL: Selected IOVA mode 'VA'
00:06:48.876  EAL: Probing VFIO support...
00:06:48.876  EAL: IOMMU type 1 (Type 1) is supported
00:06:48.876  EAL: IOMMU type 7 (sPAPR) is not supported
00:06:48.876  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:48.876  EAL: VFIO support initialized
00:06:48.876  EAL: Ask a virtual area of 0x2e000 bytes
00:06:48.876  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:48.876  EAL: Setting up physically contiguous memory...
00:06:48.876  EAL: Setting maximum number of open files to 524288
00:06:48.876  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:48.876  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:48.876  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:48.876  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:48.876  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.876  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:48.876  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:48.876  EAL: Ask a virtual area of 0x61000 bytes
00:06:48.876  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:48.876  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:48.877  EAL: Ask a virtual area of 0x400000000 bytes
00:06:48.877  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:48.877  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:48.877  EAL: Hugepages will be freed exactly as allocated.
00:06:48.877  EAL: No shared files mode enabled, IPC is disabled
00:06:48.877  EAL: No shared files mode enabled, IPC is disabled
00:06:48.877  EAL: TSC frequency is ~2300000 KHz
00:06:48.877  EAL: Main lcore 0 is ready (tid=7fe583518a40;cpuset=[0])
00:06:48.877  EAL: Trying to obtain current memory policy.
00:06:48.877  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:48.877  EAL: Restoring previous memory policy: 0
00:06:48.877  EAL: request: mp_malloc_sync
00:06:48.877  EAL: No shared files mode enabled, IPC is disabled
00:06:48.877  EAL: Heap on socket 0 was expanded by 2MB
00:06:48.877  EAL: No shared files mode enabled, IPC is disabled
00:06:48.877  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:48.877  EAL: Mem event callback 'spdk:(nil)' registered
00:06:48.877  
00:06:48.877  
00:06:48.877       CUnit - A unit testing framework for C - Version 2.1-3
00:06:48.877       http://cunit.sourceforge.net/
00:06:48.877  
00:06:48.877  
00:06:48.877  Suite: components_suite
00:06:49.444    Test: vtophys_malloc_test ...passed
00:06:49.444    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:49.444  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.444  EAL: Restoring previous memory policy: 4
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was expanded by 4MB
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was shrunk by 4MB
00:06:49.444  EAL: Trying to obtain current memory policy.
00:06:49.444  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.444  EAL: Restoring previous memory policy: 4
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was expanded by 6MB
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was shrunk by 6MB
00:06:49.444  EAL: Trying to obtain current memory policy.
00:06:49.444  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.444  EAL: Restoring previous memory policy: 4
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was expanded by 10MB
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was shrunk by 10MB
00:06:49.444  EAL: Trying to obtain current memory policy.
00:06:49.444  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.444  EAL: Restoring previous memory policy: 4
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was expanded by 18MB
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was shrunk by 18MB
00:06:49.444  EAL: Trying to obtain current memory policy.
00:06:49.444  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.444  EAL: Restoring previous memory policy: 4
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was expanded by 34MB
00:06:49.444  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.444  EAL: request: mp_malloc_sync
00:06:49.444  EAL: No shared files mode enabled, IPC is disabled
00:06:49.444  EAL: Heap on socket 0 was shrunk by 34MB
00:06:49.703  EAL: Trying to obtain current memory policy.
00:06:49.703  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.703  EAL: Restoring previous memory policy: 4
00:06:49.703  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.703  EAL: request: mp_malloc_sync
00:06:49.703  EAL: No shared files mode enabled, IPC is disabled
00:06:49.703  EAL: Heap on socket 0 was expanded by 66MB
00:06:49.703  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.703  EAL: request: mp_malloc_sync
00:06:49.703  EAL: No shared files mode enabled, IPC is disabled
00:06:49.703  EAL: Heap on socket 0 was shrunk by 66MB
00:06:49.703  EAL: Trying to obtain current memory policy.
00:06:49.703  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:49.961  EAL: Restoring previous memory policy: 4
00:06:49.961  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.961  EAL: request: mp_malloc_sync
00:06:49.961  EAL: No shared files mode enabled, IPC is disabled
00:06:49.961  EAL: Heap on socket 0 was expanded by 130MB
00:06:49.961  EAL: Calling mem event callback 'spdk:(nil)'
00:06:49.961  EAL: request: mp_malloc_sync
00:06:49.961  EAL: No shared files mode enabled, IPC is disabled
00:06:49.961  EAL: Heap on socket 0 was shrunk by 130MB
00:06:50.221  EAL: Trying to obtain current memory policy.
00:06:50.221  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:50.221  EAL: Restoring previous memory policy: 4
00:06:50.221  EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.221  EAL: request: mp_malloc_sync
00:06:50.221  EAL: No shared files mode enabled, IPC is disabled
00:06:50.221  EAL: Heap on socket 0 was expanded by 258MB
00:06:50.787  EAL: Calling mem event callback 'spdk:(nil)'
00:06:50.787  EAL: request: mp_malloc_sync
00:06:50.787  EAL: No shared files mode enabled, IPC is disabled
00:06:50.787  EAL: Heap on socket 0 was shrunk by 258MB
00:06:51.354  EAL: Trying to obtain current memory policy.
00:06:51.354  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:51.354  EAL: Restoring previous memory policy: 4
00:06:51.354  EAL: Calling mem event callback 'spdk:(nil)'
00:06:51.354  EAL: request: mp_malloc_sync
00:06:51.354  EAL: No shared files mode enabled, IPC is disabled
00:06:51.354  EAL: Heap on socket 0 was expanded by 514MB
00:06:52.289  EAL: Calling mem event callback 'spdk:(nil)'
00:06:52.289  EAL: request: mp_malloc_sync
00:06:52.289  EAL: No shared files mode enabled, IPC is disabled
00:06:52.289  EAL: Heap on socket 0 was shrunk by 514MB
00:06:53.224  EAL: Trying to obtain current memory policy.
00:06:53.224  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:53.224  EAL: Restoring previous memory policy: 4
00:06:53.224  EAL: Calling mem event callback 'spdk:(nil)'
00:06:53.224  EAL: request: mp_malloc_sync
00:06:53.224  EAL: No shared files mode enabled, IPC is disabled
00:06:53.224  EAL: Heap on socket 0 was expanded by 1026MB
00:06:55.126  EAL: Calling mem event callback 'spdk:(nil)'
00:06:55.126  EAL: request: mp_malloc_sync
00:06:55.126  EAL: No shared files mode enabled, IPC is disabled
00:06:55.126  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:57.027  passed
00:06:57.027  
00:06:57.027  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:57.027                suites      1      1    n/a      0        0
00:06:57.027                 tests      2      2      2      0        0
00:06:57.027               asserts    497    497    497      0      n/a
00:06:57.027  
00:06:57.027  Elapsed time =    7.600 seconds
00:06:57.027  EAL: Calling mem event callback 'spdk:(nil)'
00:06:57.027  EAL: request: mp_malloc_sync
00:06:57.027  EAL: No shared files mode enabled, IPC is disabled
00:06:57.027  EAL: Heap on socket 0 was shrunk by 2MB
00:06:57.027  EAL: No shared files mode enabled, IPC is disabled
00:06:57.027  EAL: No shared files mode enabled, IPC is disabled
00:06:57.027  EAL: No shared files mode enabled, IPC is disabled
00:06:57.027  
00:06:57.027  real	0m7.871s
00:06:57.027  user	0m6.891s
00:06:57.027  sys	0m0.927s
00:06:57.027   10:34:46 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.027   10:34:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:57.027  ************************************
00:06:57.027  END TEST env_vtophys
00:06:57.027  ************************************
00:06:57.027   10:34:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/pci/pci_ut
00:06:57.027   10:34:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:57.027   10:34:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.027   10:34:46 env -- common/autotest_common.sh@10 -- # set +x
00:06:57.027  ************************************
00:06:57.027  START TEST env_pci
00:06:57.027  ************************************
00:06:57.027   10:34:46 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/pci/pci_ut
00:06:57.027  
00:06:57.027  
00:06:57.027       CUnit - A unit testing framework for C - Version 2.1-3
00:06:57.027       http://cunit.sourceforge.net/
00:06:57.027  
00:06:57.027  
00:06:57.027  Suite: pci
00:06:57.027    Test: pci_hook ...[2024-11-19 10:34:46.465135] /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1835713 has claimed it
00:06:57.027  EAL: Cannot find device (10000:00:01.0)
00:06:57.027  EAL: Failed to attach device on primary process
00:06:57.027  passed
00:06:57.027  
00:06:57.027  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:57.027                suites      1      1    n/a      0        0
00:06:57.027                 tests      1      1      1      0        0
00:06:57.027               asserts     25     25     25      0      n/a
00:06:57.027  
00:06:57.027  Elapsed time =    0.054 seconds
00:06:57.027  
00:06:57.027  real	0m0.148s
00:06:57.027  user	0m0.046s
00:06:57.027  sys	0m0.102s
00:06:57.027   10:34:46 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:57.027   10:34:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:57.027  ************************************
00:06:57.027  END TEST env_pci
00:06:57.027  ************************************
00:06:57.027   10:34:46 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:57.027    10:34:46 env -- env/env.sh@15 -- # uname
00:06:57.027   10:34:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:57.027   10:34:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:57.027   10:34:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:57.027   10:34:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:57.027   10:34:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:57.027   10:34:46 env -- common/autotest_common.sh@10 -- # set +x
00:06:57.027  ************************************
00:06:57.027  START TEST env_dpdk_post_init
00:06:57.027  ************************************
00:06:57.027   10:34:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:57.027  EAL: Detected CPU lcores: 72
00:06:57.027  EAL: Detected NUMA nodes: 2
00:06:57.027  EAL: Detected shared linkage of DPDK
00:06:57.027  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:57.027  EAL: Selected IOVA mode 'VA'
00:06:57.027  EAL: VFIO support initialized
00:06:57.027  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:57.285  EAL: Using IOMMU type 1 (Type 1)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:06:57.285  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:06:57.544  EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:57.544  EAL: Ignore mapping IO port bar(1)
00:06:57.544  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:57.803  EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1)
00:06:58.060  EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1)
00:06:58.060  EAL: Releasing PCI mapped resource for 0000:af:00.0
00:06:58.060  EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001044000
00:06:58.318  EAL: Releasing PCI mapped resource for 0000:b0:00.0
00:06:58.318  EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001048000
00:06:58.318  EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:06:58.318  EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:06:58.577  Starting DPDK initialization...
00:06:58.577  Starting SPDK post initialization...
00:06:58.577  SPDK NVMe probe
00:06:58.577  Attaching to 0000:5e:00.0
00:06:58.577  Attaching to 0000:af:00.0
00:06:58.577  Attaching to 0000:b0:00.0
00:06:58.577  Attached to 0000:af:00.0
00:06:58.577  Attached to 0000:b0:00.0
00:06:58.577  Attached to 0000:5e:00.0
00:06:58.577  Cleaning up...
00:06:58.577  
00:06:58.577  real	0m1.494s
00:06:58.577  user	0m0.213s
00:06:58.577  sys	0m0.412s
00:06:58.577   10:34:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.577   10:34:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:58.577  ************************************
00:06:58.577  END TEST env_dpdk_post_init
00:06:58.577  ************************************
00:06:58.577    10:34:48 env -- env/env.sh@26 -- # uname
00:06:58.577   10:34:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:58.577   10:34:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:58.577   10:34:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.577   10:34:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.577   10:34:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:58.577  ************************************
00:06:58.577  START TEST env_mem_callbacks
00:06:58.577  ************************************
00:06:58.577   10:34:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:58.577  EAL: Detected CPU lcores: 72
00:06:58.577  EAL: Detected NUMA nodes: 2
00:06:58.577  EAL: Detected shared linkage of DPDK
00:06:58.577  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:58.577  EAL: Selected IOVA mode 'VA'
00:06:58.577  EAL: VFIO support initialized
00:06:58.577  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:58.577  
00:06:58.577  
00:06:58.577       CUnit - A unit testing framework for C - Version 2.1-3
00:06:58.577       http://cunit.sourceforge.net/
00:06:58.577  
00:06:58.577  
00:06:58.577  Suite: memory
00:06:58.577    Test: test ...
00:06:58.577  register 0x200000200000 2097152
00:06:58.577  malloc 3145728
00:06:58.577  register 0x200000400000 4194304
00:06:58.577  buf 0x2000004fffc0 len 3145728 PASSED
00:06:58.577  malloc 64
00:06:58.577  buf 0x2000004ffec0 len 64 PASSED
00:06:58.577  malloc 4194304
00:06:58.577  register 0x200000800000 6291456
00:06:58.577  buf 0x2000009fffc0 len 4194304 PASSED
00:06:58.577  free 0x2000004fffc0 3145728
00:06:58.577  free 0x2000004ffec0 64
00:06:58.577  unregister 0x200000400000 4194304 PASSED
00:06:58.577  free 0x2000009fffc0 4194304
00:06:58.577  unregister 0x200000800000 6291456 PASSED
00:06:58.577  malloc 8388608
00:06:58.836  register 0x200000400000 10485760
00:06:58.836  buf 0x2000005fffc0 len 8388608 PASSED
00:06:58.836  free 0x2000005fffc0 8388608
00:06:58.836  unregister 0x200000400000 10485760 PASSED
00:06:58.836  passed
00:06:58.836  
00:06:58.836  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:58.836                suites      1      1    n/a      0        0
00:06:58.836                 tests      1      1      1      0        0
00:06:58.836               asserts     15     15     15      0      n/a
00:06:58.836  
00:06:58.836  Elapsed time =    0.063 seconds
00:06:58.836  
00:06:58.836  real	0m0.196s
00:06:58.836  user	0m0.100s
00:06:58.836  sys	0m0.095s
00:06:58.836   10:34:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.836   10:34:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:58.837  ************************************
00:06:58.837  END TEST env_mem_callbacks
00:06:58.837  ************************************
00:06:58.837  
00:06:58.837  real	0m10.535s
00:06:58.837  user	0m7.711s
00:06:58.837  sys	0m1.950s
00:06:58.837   10:34:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.837   10:34:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:58.837  ************************************
00:06:58.837  END TEST env
00:06:58.837  ************************************
00:06:58.837   10:34:48  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/rpc.sh
00:06:58.837   10:34:48  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.837   10:34:48  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.837   10:34:48  -- common/autotest_common.sh@10 -- # set +x
00:06:58.837  ************************************
00:06:58.837  START TEST rpc
00:06:58.837  ************************************
00:06:58.837   10:34:48 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/rpc.sh
00:06:58.837  * Looking for test storage...
00:06:59.096  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:59.096     10:34:48 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:59.096     10:34:48 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:59.096    10:34:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:59.096    10:34:48 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:59.096    10:34:48 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:59.096    10:34:48 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:59.096    10:34:48 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:59.096    10:34:48 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:59.096    10:34:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:59.096    10:34:48 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:59.096    10:34:48 rpc -- scripts/common.sh@345 -- # : 1
00:06:59.096    10:34:48 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:59.096    10:34:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:59.096     10:34:48 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:59.096     10:34:48 rpc -- scripts/common.sh@353 -- # local d=1
00:06:59.096     10:34:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:59.096     10:34:48 rpc -- scripts/common.sh@355 -- # echo 1
00:06:59.096    10:34:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:59.096     10:34:48 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:59.096     10:34:48 rpc -- scripts/common.sh@353 -- # local d=2
00:06:59.096     10:34:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:59.096     10:34:48 rpc -- scripts/common.sh@355 -- # echo 2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:59.096    10:34:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:59.096    10:34:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:59.096    10:34:48 rpc -- scripts/common.sh@368 -- # return 0
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:59.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:59.096  		--rc genhtml_branch_coverage=1
00:06:59.096  		--rc genhtml_function_coverage=1
00:06:59.096  		--rc genhtml_legend=1
00:06:59.096  		--rc geninfo_all_blocks=1
00:06:59.096  		--rc geninfo_unexecuted_blocks=1
00:06:59.096  		
00:06:59.096  		'
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:59.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:59.096  		--rc genhtml_branch_coverage=1
00:06:59.096  		--rc genhtml_function_coverage=1
00:06:59.096  		--rc genhtml_legend=1
00:06:59.096  		--rc geninfo_all_blocks=1
00:06:59.096  		--rc geninfo_unexecuted_blocks=1
00:06:59.096  		
00:06:59.096  		'
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:59.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:59.096  		--rc genhtml_branch_coverage=1
00:06:59.096  		--rc genhtml_function_coverage=1
00:06:59.096  		--rc genhtml_legend=1
00:06:59.096  		--rc geninfo_all_blocks=1
00:06:59.096  		--rc geninfo_unexecuted_blocks=1
00:06:59.096  		
00:06:59.096  		'
00:06:59.096    10:34:48 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:59.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:59.096  		--rc genhtml_branch_coverage=1
00:06:59.096  		--rc genhtml_function_coverage=1
00:06:59.096  		--rc genhtml_legend=1
00:06:59.096  		--rc geninfo_all_blocks=1
00:06:59.096  		--rc geninfo_unexecuted_blocks=1
00:06:59.096  		
00:06:59.096  		'
00:06:59.096   10:34:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1836205
00:06:59.096   10:34:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:59.096   10:34:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:59.096   10:34:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1836205
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@835 -- # '[' -z 1836205 ']'
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.096  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:59.096   10:34:48 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:59.096  [2024-11-19 10:34:48.823508] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:06:59.096  [2024-11-19 10:34:48.823609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836205 ]
00:06:59.355  [2024-11-19 10:34:48.962490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.355  [2024-11-19 10:34:49.063429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:59.355  [2024-11-19 10:34:49.063491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1836205' to capture a snapshot of events at runtime.
00:06:59.355  [2024-11-19 10:34:49.063506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:59.355  [2024-11-19 10:34:49.063517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:59.355  [2024-11-19 10:34:49.063534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1836205 for offline analysis/debug.
00:06:59.355  [2024-11-19 10:34:49.064822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.290   10:34:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:00.290   10:34:49 rpc -- common/autotest_common.sh@868 -- # return 0
00:07:00.290   10:34:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc
00:07:00.290   10:34:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc
00:07:00.290   10:34:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:00.290   10:34:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:00.290   10:34:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.290   10:34:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.290   10:34:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.290  ************************************
00:07:00.290  START TEST rpc_integrity
00:07:00.290  ************************************
00:07:00.290   10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:07:00.290    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.290   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:00.290    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:07:00.290   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:00.290    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.290    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.290   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:00.290    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:00.291    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.291    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.291    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.291   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:07:00.291  {
00:07:00.291  "name": "Malloc0",
00:07:00.291  "aliases": [
00:07:00.291  "e7e235e3-3afb-4a9c-9853-36c4ef5507cb"
00:07:00.291  ],
00:07:00.291  "product_name": "Malloc disk",
00:07:00.291  "block_size": 512,
00:07:00.291  "num_blocks": 16384,
00:07:00.291  "uuid": "e7e235e3-3afb-4a9c-9853-36c4ef5507cb",
00:07:00.291  "assigned_rate_limits": {
00:07:00.291  "rw_ios_per_sec": 0,
00:07:00.291  "rw_mbytes_per_sec": 0,
00:07:00.291  "r_mbytes_per_sec": 0,
00:07:00.291  "w_mbytes_per_sec": 0
00:07:00.291  },
00:07:00.291  "claimed": false,
00:07:00.291  "zoned": false,
00:07:00.291  "supported_io_types": {
00:07:00.291  "read": true,
00:07:00.291  "write": true,
00:07:00.291  "unmap": true,
00:07:00.291  "flush": true,
00:07:00.291  "reset": true,
00:07:00.291  "nvme_admin": false,
00:07:00.291  "nvme_io": false,
00:07:00.291  "nvme_io_md": false,
00:07:00.291  "write_zeroes": true,
00:07:00.291  "zcopy": true,
00:07:00.291  "get_zone_info": false,
00:07:00.291  "zone_management": false,
00:07:00.291  "zone_append": false,
00:07:00.291  "compare": false,
00:07:00.291  "compare_and_write": false,
00:07:00.291  "abort": true,
00:07:00.291  "seek_hole": false,
00:07:00.291  "seek_data": false,
00:07:00.291  "copy": true,
00:07:00.291  "nvme_iov_md": false
00:07:00.291  },
00:07:00.291  "memory_domains": [
00:07:00.291  {
00:07:00.291  "dma_device_id": "system",
00:07:00.291  "dma_device_type": 1
00:07:00.291  },
00:07:00.291  {
00:07:00.291  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:00.291  "dma_device_type": 2
00:07:00.291  }
00:07:00.291  ],
00:07:00.291  "driver_specific": {}
00:07:00.291  }
00:07:00.291  ]'
00:07:00.291    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:07:00.291   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:00.291   10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:00.291   10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.291   10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.291  [2024-11-19 10:34:49.993335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:00.291  [2024-11-19 10:34:49.993394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:00.291  [2024-11-19 10:34:49.993422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001e080
00:07:00.291  [2024-11-19 10:34:49.993435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:00.291  [2024-11-19 10:34:49.995711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:00.291  [2024-11-19 10:34:49.995741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:00.291  Passthru0
00:07:00.291   10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.291    10:34:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:00.291    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.291    10:34:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.291    10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.291   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:07:00.291  {
00:07:00.291  "name": "Malloc0",
00:07:00.291  "aliases": [
00:07:00.291  "e7e235e3-3afb-4a9c-9853-36c4ef5507cb"
00:07:00.291  ],
00:07:00.291  "product_name": "Malloc disk",
00:07:00.291  "block_size": 512,
00:07:00.291  "num_blocks": 16384,
00:07:00.291  "uuid": "e7e235e3-3afb-4a9c-9853-36c4ef5507cb",
00:07:00.291  "assigned_rate_limits": {
00:07:00.291  "rw_ios_per_sec": 0,
00:07:00.291  "rw_mbytes_per_sec": 0,
00:07:00.291  "r_mbytes_per_sec": 0,
00:07:00.291  "w_mbytes_per_sec": 0
00:07:00.291  },
00:07:00.291  "claimed": true,
00:07:00.291  "claim_type": "exclusive_write",
00:07:00.291  "zoned": false,
00:07:00.291  "supported_io_types": {
00:07:00.291  "read": true,
00:07:00.291  "write": true,
00:07:00.291  "unmap": true,
00:07:00.291  "flush": true,
00:07:00.291  "reset": true,
00:07:00.291  "nvme_admin": false,
00:07:00.291  "nvme_io": false,
00:07:00.291  "nvme_io_md": false,
00:07:00.291  "write_zeroes": true,
00:07:00.291  "zcopy": true,
00:07:00.291  "get_zone_info": false,
00:07:00.291  "zone_management": false,
00:07:00.291  "zone_append": false,
00:07:00.291  "compare": false,
00:07:00.291  "compare_and_write": false,
00:07:00.291  "abort": true,
00:07:00.291  "seek_hole": false,
00:07:00.291  "seek_data": false,
00:07:00.291  "copy": true,
00:07:00.291  "nvme_iov_md": false
00:07:00.291  },
00:07:00.291  "memory_domains": [
00:07:00.291  {
00:07:00.291  "dma_device_id": "system",
00:07:00.291  "dma_device_type": 1
00:07:00.291  },
00:07:00.291  {
00:07:00.291  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:00.291  "dma_device_type": 2
00:07:00.291  }
00:07:00.291  ],
00:07:00.291  "driver_specific": {}
00:07:00.291  },
00:07:00.291  {
00:07:00.291  "name": "Passthru0",
00:07:00.291  "aliases": [
00:07:00.291  "49a918f7-6b5f-50a8-a7c9-3c88a9d78392"
00:07:00.291  ],
00:07:00.291  "product_name": "passthru",
00:07:00.291  "block_size": 512,
00:07:00.291  "num_blocks": 16384,
00:07:00.291  "uuid": "49a918f7-6b5f-50a8-a7c9-3c88a9d78392",
00:07:00.291  "assigned_rate_limits": {
00:07:00.291  "rw_ios_per_sec": 0,
00:07:00.291  "rw_mbytes_per_sec": 0,
00:07:00.291  "r_mbytes_per_sec": 0,
00:07:00.291  "w_mbytes_per_sec": 0
00:07:00.291  },
00:07:00.291  "claimed": false,
00:07:00.291  "zoned": false,
00:07:00.291  "supported_io_types": {
00:07:00.291  "read": true,
00:07:00.291  "write": true,
00:07:00.291  "unmap": true,
00:07:00.291  "flush": true,
00:07:00.291  "reset": true,
00:07:00.291  "nvme_admin": false,
00:07:00.291  "nvme_io": false,
00:07:00.291  "nvme_io_md": false,
00:07:00.291  "write_zeroes": true,
00:07:00.291  "zcopy": true,
00:07:00.291  "get_zone_info": false,
00:07:00.291  "zone_management": false,
00:07:00.291  "zone_append": false,
00:07:00.291  "compare": false,
00:07:00.291  "compare_and_write": false,
00:07:00.291  "abort": true,
00:07:00.291  "seek_hole": false,
00:07:00.291  "seek_data": false,
00:07:00.291  "copy": true,
00:07:00.291  "nvme_iov_md": false
00:07:00.291  },
00:07:00.291  "memory_domains": [
00:07:00.291  {
00:07:00.291  "dma_device_id": "system",
00:07:00.291  "dma_device_type": 1
00:07:00.291  },
00:07:00.291  {
00:07:00.291  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:00.291  "dma_device_type": 2
00:07:00.291  }
00:07:00.291  ],
00:07:00.291  "driver_specific": {
00:07:00.291  "passthru": {
00:07:00.291  "name": "Passthru0",
00:07:00.291  "base_bdev_name": "Malloc0"
00:07:00.291  }
00:07:00.291  }
00:07:00.291  }
00:07:00.291  ]'
00:07:00.291    10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:07:00.291   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:00.291   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:00.291   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.291   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.291   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.291   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:07:00.291   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.291   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.549   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.549    10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:00.549    10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.549    10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.550    10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:00.550    10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:00.550   10:34:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:00.550  
00:07:00.550  real	0m0.314s
00:07:00.550  user	0m0.171s
00:07:00.550  sys	0m0.049s
00:07:00.550   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.550   10:34:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:00.550  ************************************
00:07:00.550  END TEST rpc_integrity
00:07:00.550  ************************************
00:07:00.550   10:34:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:07:00.550   10:34:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.550   10:34:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.550   10:34:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.550  ************************************
00:07:00.550  START TEST rpc_plugins
00:07:00.550  ************************************
00:07:00.550   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:07:00.550    10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:07:00.550    10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:07:00.550  {
00:07:00.550  "name": "Malloc1",
00:07:00.550  "aliases": [
00:07:00.550  "0bf38c27-4849-4048-884e-058ad6da030f"
00:07:00.550  ],
00:07:00.550  "product_name": "Malloc disk",
00:07:00.550  "block_size": 4096,
00:07:00.550  "num_blocks": 256,
00:07:00.550  "uuid": "0bf38c27-4849-4048-884e-058ad6da030f",
00:07:00.550  "assigned_rate_limits": {
00:07:00.550  "rw_ios_per_sec": 0,
00:07:00.550  "rw_mbytes_per_sec": 0,
00:07:00.550  "r_mbytes_per_sec": 0,
00:07:00.550  "w_mbytes_per_sec": 0
00:07:00.550  },
00:07:00.550  "claimed": false,
00:07:00.550  "zoned": false,
00:07:00.550  "supported_io_types": {
00:07:00.550  "read": true,
00:07:00.550  "write": true,
00:07:00.550  "unmap": true,
00:07:00.550  "flush": true,
00:07:00.550  "reset": true,
00:07:00.550  "nvme_admin": false,
00:07:00.550  "nvme_io": false,
00:07:00.550  "nvme_io_md": false,
00:07:00.550  "write_zeroes": true,
00:07:00.550  "zcopy": true,
00:07:00.550  "get_zone_info": false,
00:07:00.550  "zone_management": false,
00:07:00.550  "zone_append": false,
00:07:00.550  "compare": false,
00:07:00.550  "compare_and_write": false,
00:07:00.550  "abort": true,
00:07:00.550  "seek_hole": false,
00:07:00.550  "seek_data": false,
00:07:00.550  "copy": true,
00:07:00.550  "nvme_iov_md": false
00:07:00.550  },
00:07:00.550  "memory_domains": [
00:07:00.550  {
00:07:00.550  "dma_device_id": "system",
00:07:00.550  "dma_device_type": 1
00:07:00.550  },
00:07:00.550  {
00:07:00.550  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:00.550  "dma_device_type": 2
00:07:00.550  }
00:07:00.550  ],
00:07:00.550  "driver_specific": {}
00:07:00.550  }
00:07:00.550  ]'
00:07:00.550    10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:07:00.550   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:07:00.550   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:07:00.550   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:00.550   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550    10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:00.810    10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.810   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:07:00.810    10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:07:00.810   10:34:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:07:00.810  
00:07:00.810  real	0m0.149s
00:07:00.810  user	0m0.080s
00:07:00.810  sys	0m0.033s
00:07:00.810   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.810   10:34:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:07:00.810  ************************************
00:07:00.810  END TEST rpc_plugins
00:07:00.810  ************************************
00:07:00.810   10:34:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:07:00.810   10:34:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.810   10:34:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.810   10:34:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:00.810  ************************************
00:07:00.810  START TEST rpc_trace_cmd_test
00:07:00.810  ************************************
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:07:00.810  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1836205",
00:07:00.810  "tpoint_group_mask": "0x8",
00:07:00.810  "iscsi_conn": {
00:07:00.810  "mask": "0x2",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "scsi": {
00:07:00.810  "mask": "0x4",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "bdev": {
00:07:00.810  "mask": "0x8",
00:07:00.810  "tpoint_mask": "0xffffffffffffffff"
00:07:00.810  },
00:07:00.810  "nvmf_rdma": {
00:07:00.810  "mask": "0x10",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "nvmf_tcp": {
00:07:00.810  "mask": "0x20",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "ftl": {
00:07:00.810  "mask": "0x40",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "blobfs": {
00:07:00.810  "mask": "0x80",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "dsa": {
00:07:00.810  "mask": "0x200",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "thread": {
00:07:00.810  "mask": "0x400",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "nvme_pcie": {
00:07:00.810  "mask": "0x800",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "iaa": {
00:07:00.810  "mask": "0x1000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "nvme_tcp": {
00:07:00.810  "mask": "0x2000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "bdev_nvme": {
00:07:00.810  "mask": "0x4000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "sock": {
00:07:00.810  "mask": "0x8000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "blob": {
00:07:00.810  "mask": "0x10000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "bdev_raid": {
00:07:00.810  "mask": "0x20000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  },
00:07:00.810  "scheduler": {
00:07:00.810  "mask": "0x40000",
00:07:00.810  "tpoint_mask": "0x0"
00:07:00.810  }
00:07:00.810  }'
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:07:00.810   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:07:00.810    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:07:01.069   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:07:01.069    10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:07:01.069   10:34:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:07:01.069  
00:07:01.069  real	0m0.202s
00:07:01.069  user	0m0.171s
00:07:01.069  sys	0m0.025s
00:07:01.069   10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.069   10:34:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.069  ************************************
00:07:01.069  END TEST rpc_trace_cmd_test
00:07:01.069  ************************************
00:07:01.069   10:34:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:07:01.069   10:34:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:07:01.069   10:34:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:07:01.069   10:34:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:01.069   10:34:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:01.069   10:34:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:01.069  ************************************
00:07:01.069  START TEST rpc_daemon_integrity
00:07:01.069  ************************************
00:07:01.069   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.069   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:07:01.069   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.069   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.069   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:07:01.069  {
00:07:01.069  "name": "Malloc2",
00:07:01.069  "aliases": [
00:07:01.069  "c28eef32-4193-40fd-b2fe-12a21d8b6c51"
00:07:01.069  ],
00:07:01.069  "product_name": "Malloc disk",
00:07:01.069  "block_size": 512,
00:07:01.069  "num_blocks": 16384,
00:07:01.069  "uuid": "c28eef32-4193-40fd-b2fe-12a21d8b6c51",
00:07:01.069  "assigned_rate_limits": {
00:07:01.069  "rw_ios_per_sec": 0,
00:07:01.069  "rw_mbytes_per_sec": 0,
00:07:01.069  "r_mbytes_per_sec": 0,
00:07:01.069  "w_mbytes_per_sec": 0
00:07:01.069  },
00:07:01.069  "claimed": false,
00:07:01.069  "zoned": false,
00:07:01.069  "supported_io_types": {
00:07:01.069  "read": true,
00:07:01.069  "write": true,
00:07:01.069  "unmap": true,
00:07:01.069  "flush": true,
00:07:01.069  "reset": true,
00:07:01.069  "nvme_admin": false,
00:07:01.069  "nvme_io": false,
00:07:01.069  "nvme_io_md": false,
00:07:01.069  "write_zeroes": true,
00:07:01.069  "zcopy": true,
00:07:01.069  "get_zone_info": false,
00:07:01.069  "zone_management": false,
00:07:01.069  "zone_append": false,
00:07:01.069  "compare": false,
00:07:01.069  "compare_and_write": false,
00:07:01.069  "abort": true,
00:07:01.069  "seek_hole": false,
00:07:01.069  "seek_data": false,
00:07:01.069  "copy": true,
00:07:01.069  "nvme_iov_md": false
00:07:01.069  },
00:07:01.069  "memory_domains": [
00:07:01.069  {
00:07:01.069  "dma_device_id": "system",
00:07:01.069  "dma_device_type": 1
00:07:01.069  },
00:07:01.069  {
00:07:01.069  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.069  "dma_device_type": 2
00:07:01.069  }
00:07:01.069  ],
00:07:01.069  "driver_specific": {}
00:07:01.069  }
00:07:01.069  ]'
00:07:01.069    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.328  [2024-11-19 10:34:50.891876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:07:01.328  [2024-11-19 10:34:50.891926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:01.328  [2024-11-19 10:34:50.891951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001f280
00:07:01.328  [2024-11-19 10:34:50.891964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:01.328  [2024-11-19 10:34:50.894199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:01.328  [2024-11-19 10:34:50.894225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:01.328  Passthru0
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.328    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:01.328    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.328    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.328    10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.328   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:07:01.328  {
00:07:01.328  "name": "Malloc2",
00:07:01.328  "aliases": [
00:07:01.328  "c28eef32-4193-40fd-b2fe-12a21d8b6c51"
00:07:01.328  ],
00:07:01.328  "product_name": "Malloc disk",
00:07:01.328  "block_size": 512,
00:07:01.328  "num_blocks": 16384,
00:07:01.328  "uuid": "c28eef32-4193-40fd-b2fe-12a21d8b6c51",
00:07:01.328  "assigned_rate_limits": {
00:07:01.328  "rw_ios_per_sec": 0,
00:07:01.328  "rw_mbytes_per_sec": 0,
00:07:01.328  "r_mbytes_per_sec": 0,
00:07:01.328  "w_mbytes_per_sec": 0
00:07:01.328  },
00:07:01.328  "claimed": true,
00:07:01.328  "claim_type": "exclusive_write",
00:07:01.328  "zoned": false,
00:07:01.328  "supported_io_types": {
00:07:01.328  "read": true,
00:07:01.328  "write": true,
00:07:01.328  "unmap": true,
00:07:01.328  "flush": true,
00:07:01.328  "reset": true,
00:07:01.328  "nvme_admin": false,
00:07:01.328  "nvme_io": false,
00:07:01.328  "nvme_io_md": false,
00:07:01.328  "write_zeroes": true,
00:07:01.328  "zcopy": true,
00:07:01.328  "get_zone_info": false,
00:07:01.328  "zone_management": false,
00:07:01.328  "zone_append": false,
00:07:01.328  "compare": false,
00:07:01.328  "compare_and_write": false,
00:07:01.328  "abort": true,
00:07:01.328  "seek_hole": false,
00:07:01.328  "seek_data": false,
00:07:01.328  "copy": true,
00:07:01.328  "nvme_iov_md": false
00:07:01.328  },
00:07:01.328  "memory_domains": [
00:07:01.328  {
00:07:01.328  "dma_device_id": "system",
00:07:01.328  "dma_device_type": 1
00:07:01.328  },
00:07:01.328  {
00:07:01.328  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.329  "dma_device_type": 2
00:07:01.329  }
00:07:01.329  ],
00:07:01.329  "driver_specific": {}
00:07:01.329  },
00:07:01.329  {
00:07:01.329  "name": "Passthru0",
00:07:01.329  "aliases": [
00:07:01.329  "96a08766-9812-5ac0-99dd-363a5d5bb699"
00:07:01.329  ],
00:07:01.329  "product_name": "passthru",
00:07:01.329  "block_size": 512,
00:07:01.329  "num_blocks": 16384,
00:07:01.329  "uuid": "96a08766-9812-5ac0-99dd-363a5d5bb699",
00:07:01.329  "assigned_rate_limits": {
00:07:01.329  "rw_ios_per_sec": 0,
00:07:01.329  "rw_mbytes_per_sec": 0,
00:07:01.329  "r_mbytes_per_sec": 0,
00:07:01.329  "w_mbytes_per_sec": 0
00:07:01.329  },
00:07:01.329  "claimed": false,
00:07:01.329  "zoned": false,
00:07:01.329  "supported_io_types": {
00:07:01.329  "read": true,
00:07:01.329  "write": true,
00:07:01.329  "unmap": true,
00:07:01.329  "flush": true,
00:07:01.329  "reset": true,
00:07:01.329  "nvme_admin": false,
00:07:01.329  "nvme_io": false,
00:07:01.329  "nvme_io_md": false,
00:07:01.329  "write_zeroes": true,
00:07:01.329  "zcopy": true,
00:07:01.329  "get_zone_info": false,
00:07:01.329  "zone_management": false,
00:07:01.329  "zone_append": false,
00:07:01.329  "compare": false,
00:07:01.329  "compare_and_write": false,
00:07:01.329  "abort": true,
00:07:01.329  "seek_hole": false,
00:07:01.329  "seek_data": false,
00:07:01.329  "copy": true,
00:07:01.329  "nvme_iov_md": false
00:07:01.329  },
00:07:01.329  "memory_domains": [
00:07:01.329  {
00:07:01.329  "dma_device_id": "system",
00:07:01.329  "dma_device_type": 1
00:07:01.329  },
00:07:01.329  {
00:07:01.329  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.329  "dma_device_type": 2
00:07:01.329  }
00:07:01.329  ],
00:07:01.329  "driver_specific": {
00:07:01.329  "passthru": {
00:07:01.329  "name": "Passthru0",
00:07:01.329  "base_bdev_name": "Malloc2"
00:07:01.329  }
00:07:01.329  }
00:07:01.329  }
00:07:01.329  ]'
00:07:01.329    10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.329   10:34:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.329   10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.329    10:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:01.329    10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.329    10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.329    10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.329   10:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:01.329    10:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:01.329   10:34:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:01.329  
00:07:01.329  real	0m0.323s
00:07:01.329  user	0m0.176s
00:07:01.329  sys	0m0.059s
00:07:01.329   10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:01.329   10:34:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:01.329  ************************************
00:07:01.329  END TEST rpc_daemon_integrity
00:07:01.329  ************************************
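The integrity checks traced above follow one pattern: capture `rpc_cmd bdev_get_bdevs` output and assert its `jq length` (1 after creating Malloc2, 2 after layering Passthru0 on it, 0 after deleting both). A minimal stand-in for that count check, using `grep` so it runs even without `jq` on PATH; the bdev list here is a hand-written stub, not real RPC output:

```shell
# Stand-in for the rpc.sh length checks traced above.
# NOTE: this bdev list is a hand-written stub, not real rpc_cmd output;
# the real test pipes bdev_get_bdevs through `jq length`.
bdevs='[{"name":"Malloc2"},{"name":"Passthru0"}]'

# Count top-level bdev entries by counting their "name" keys.
count=$(echo "$bdevs" | grep -o '"name"' | wc -l)

[ "$count" -eq 2 ] && echo "2 bdevs present, as after bdev_passthru_create"
```

This mirrors the `'[' 2 == 2 ']'` assertion at rpc.sh@21 in the trace.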
00:07:01.329   10:34:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:01.329   10:34:51 rpc -- rpc/rpc.sh@84 -- # killprocess 1836205
00:07:01.329   10:34:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 1836205 ']'
00:07:01.329   10:34:51 rpc -- common/autotest_common.sh@958 -- # kill -0 1836205
00:07:01.588    10:34:51 rpc -- common/autotest_common.sh@959 -- # uname
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:01.588    10:34:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836205
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836205'
00:07:01.588  killing process with pid 1836205
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@973 -- # kill 1836205
00:07:01.588   10:34:51 rpc -- common/autotest_common.sh@978 -- # wait 1836205
00:07:04.120  
00:07:04.120  real	0m4.912s
00:07:04.120  user	0m5.396s
00:07:04.120  sys	0m1.043s
00:07:04.120   10:34:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.120   10:34:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:04.120  ************************************
00:07:04.120  END TEST rpc
00:07:04.120  ************************************
00:07:04.120   10:34:53  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:07:04.120   10:34:53  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.120   10:34:53  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.120   10:34:53  -- common/autotest_common.sh@10 -- # set +x
00:07:04.120  ************************************
00:07:04.120  START TEST skip_rpc
00:07:04.120  ************************************
00:07:04.120   10:34:53 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:07:04.120  * Looking for test storage...
00:07:04.120  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc
00:07:04.120    10:34:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:04.120     10:34:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:07:04.120     10:34:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:04.120    10:34:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:04.120    10:34:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@345 -- # : 1
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:04.121     10:34:53 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:04.121    10:34:53 skip_rpc -- scripts/common.sh@368 -- # return 0
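The `scripts/common.sh` trace above splits the two version strings on `.`, `-`, and `:` and compares them component by component, returning as soon as one side is smaller. A minimal standalone sketch of that comparison (the `version_lt` name is illustrative, not the script's own; missing components are treated as 0):

```shell
# Illustrative re-implementation of the component-wise version compare
# traced above (modeled on scripts/common.sh cmp_versions).
version_lt() {
    v1=$1 v2=$2
    while [ -n "$v1" ] || [ -n "$v2" ]; do
        # Leading component of each version (up to the first . : or -).
        c1=${v1%%[-.:]*}; c2=${v2%%[-.:]*}
        # Drop that component (and its separator) from each string.
        case $v1 in *[-.:]*) v1=${v1#*[-.:]} ;; *) v1= ;; esac
        case $v2 in *[-.:]*) v2=${v2#*[-.:]} ;; *) v2= ;; esac
        [ "${c1:-0}" -lt "${c2:-0}" ] && return 0
        [ "${c1:-0}" -gt "${c2:-0}" ] && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 sorts before 2, as in the lt 1.15 2 trace"
```

The trace above is exactly this logic for `lt 1.15 2`: the first components 1 and 2 already decide the comparison.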
00:07:04.121    10:34:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:04.121    10:34:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:04.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.121  		--rc genhtml_branch_coverage=1
00:07:04.121  		--rc genhtml_function_coverage=1
00:07:04.121  		--rc genhtml_legend=1
00:07:04.121  		--rc geninfo_all_blocks=1
00:07:04.121  		--rc geninfo_unexecuted_blocks=1
00:07:04.121  		
00:07:04.121  		'
00:07:04.121    10:34:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:04.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.121  		--rc genhtml_branch_coverage=1
00:07:04.121  		--rc genhtml_function_coverage=1
00:07:04.121  		--rc genhtml_legend=1
00:07:04.121  		--rc geninfo_all_blocks=1
00:07:04.121  		--rc geninfo_unexecuted_blocks=1
00:07:04.121  		
00:07:04.121  		'
00:07:04.121    10:34:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:04.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.121  		--rc genhtml_branch_coverage=1
00:07:04.121  		--rc genhtml_function_coverage=1
00:07:04.121  		--rc genhtml_legend=1
00:07:04.121  		--rc geninfo_all_blocks=1
00:07:04.121  		--rc geninfo_unexecuted_blocks=1
00:07:04.121  		
00:07:04.121  		'
00:07:04.121    10:34:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:04.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:04.121  		--rc genhtml_branch_coverage=1
00:07:04.121  		--rc genhtml_function_coverage=1
00:07:04.121  		--rc genhtml_legend=1
00:07:04.121  		--rc geninfo_all_blocks=1
00:07:04.121  		--rc geninfo_unexecuted_blocks=1
00:07:04.121  		
00:07:04.121  		'
00:07:04.121   10:34:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/config.json
00:07:04.121   10:34:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/log.txt
00:07:04.121   10:34:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:07:04.121   10:34:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.121   10:34:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.121   10:34:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:04.121  ************************************
00:07:04.121  START TEST skip_rpc
00:07:04.121  ************************************
00:07:04.121   10:34:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:07:04.121   10:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1836969
00:07:04.121   10:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:04.121   10:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:07:04.121   10:34:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:07:04.121  [2024-11-19 10:34:53.807479] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:04.121  [2024-11-19 10:34:53.807592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836969 ]
00:07:04.380  [2024-11-19 10:34:53.942348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.380  [2024-11-19 10:34:54.045734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:09.655    10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1836969
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1836969 ']'
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1836969
00:07:09.655    10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:09.655    10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836969
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836969'
00:07:09.655  killing process with pid 1836969
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1836969
00:07:09.655   10:34:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1836969
00:07:11.585  
00:07:11.585  real	0m7.354s
00:07:11.585  user	0m6.903s
00:07:11.585  sys	0m0.477s
00:07:11.585   10:35:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:11.585   10:35:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.585  ************************************
00:07:11.585  END TEST skip_rpc
00:07:11.585  ************************************
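The `NOT rpc_cmd spdk_get_version` sequence earlier in this test relies on a helper that inverts a command's exit status: the target was started with `--no-rpc-server`, so the RPC is expected to fail, and the test passes only if it does. A minimal sketch of that pattern (the real `autotest_common.sh` helper also records the status in `es` and treats values above 128, death by signal, specially, as the trace shows):

```shell
# Minimal sketch of the expected-failure ("NOT") pattern traced above:
# succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

NOT false && echo "expected failure observed"
```

In the trace, the wrapped command is `rpc_cmd spdk_get_version`, which fails because no RPC server is listening.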
00:07:11.585   10:35:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:07:11.585   10:35:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:11.585   10:35:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:11.585   10:35:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:11.585  ************************************
00:07:11.585  START TEST skip_rpc_with_json
00:07:11.585  ************************************
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1838064
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1838064
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1838064 ']'
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:11.585  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:11.585   10:35:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:11.585  [2024-11-19 10:35:01.236698] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:11.585  [2024-11-19 10:35:01.236817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838064 ]
00:07:11.585  [2024-11-19 10:35:01.374632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.843  [2024-11-19 10:35:01.479325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:12.781  [2024-11-19 10:35:02.239441] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:07:12.781  request:
00:07:12.781  {
00:07:12.781  "trtype": "tcp",
00:07:12.781  "method": "nvmf_get_transports",
00:07:12.781  "req_id": 1
00:07:12.781  }
00:07:12.781  Got JSON-RPC error response
00:07:12.781  response:
00:07:12.781  {
00:07:12.781  "code": -19,
00:07:12.781  "message": "No such device"
00:07:12.781  }
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:12.781  [2024-11-19 10:35:02.247545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.781   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/config.json
00:07:12.781  {
00:07:12.781  "subsystems": [
00:07:12.781  {
00:07:12.781  "subsystem": "fsdev",
00:07:12.781  "config": [
00:07:12.781  {
00:07:12.781  "method": "fsdev_set_opts",
00:07:12.781  "params": {
00:07:12.781  "fsdev_io_pool_size": 65535,
00:07:12.781  "fsdev_io_cache_size": 256
00:07:12.781  }
00:07:12.781  }
00:07:12.781  ]
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "keyring",
00:07:12.781  "config": []
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "iobuf",
00:07:12.781  "config": [
00:07:12.781  {
00:07:12.781  "method": "iobuf_set_options",
00:07:12.781  "params": {
00:07:12.781  "small_pool_count": 8192,
00:07:12.781  "large_pool_count": 1024,
00:07:12.781  "small_bufsize": 8192,
00:07:12.781  "large_bufsize": 135168,
00:07:12.781  "enable_numa": false
00:07:12.781  }
00:07:12.781  }
00:07:12.781  ]
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "sock",
00:07:12.781  "config": [
00:07:12.781  {
00:07:12.781  "method": "sock_set_default_impl",
00:07:12.781  "params": {
00:07:12.781  "impl_name": "posix"
00:07:12.781  }
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "method": "sock_impl_set_options",
00:07:12.781  "params": {
00:07:12.781  "impl_name": "ssl",
00:07:12.781  "recv_buf_size": 4096,
00:07:12.781  "send_buf_size": 4096,
00:07:12.781  "enable_recv_pipe": true,
00:07:12.781  "enable_quickack": false,
00:07:12.781  "enable_placement_id": 0,
00:07:12.781  "enable_zerocopy_send_server": true,
00:07:12.781  "enable_zerocopy_send_client": false,
00:07:12.781  "zerocopy_threshold": 0,
00:07:12.781  "tls_version": 0,
00:07:12.781  "enable_ktls": false
00:07:12.781  }
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "method": "sock_impl_set_options",
00:07:12.781  "params": {
00:07:12.781  "impl_name": "posix",
00:07:12.781  "recv_buf_size": 2097152,
00:07:12.781  "send_buf_size": 2097152,
00:07:12.781  "enable_recv_pipe": true,
00:07:12.781  "enable_quickack": false,
00:07:12.781  "enable_placement_id": 0,
00:07:12.781  "enable_zerocopy_send_server": true,
00:07:12.781  "enable_zerocopy_send_client": false,
00:07:12.781  "zerocopy_threshold": 0,
00:07:12.781  "tls_version": 0,
00:07:12.781  "enable_ktls": false
00:07:12.781  }
00:07:12.781  }
00:07:12.781  ]
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "vmd",
00:07:12.781  "config": []
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "accel",
00:07:12.781  "config": [
00:07:12.781  {
00:07:12.781  "method": "accel_set_options",
00:07:12.781  "params": {
00:07:12.781  "small_cache_size": 128,
00:07:12.781  "large_cache_size": 16,
00:07:12.781  "task_count": 2048,
00:07:12.781  "sequence_count": 2048,
00:07:12.781  "buf_count": 2048
00:07:12.781  }
00:07:12.781  }
00:07:12.781  ]
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "subsystem": "bdev",
00:07:12.781  "config": [
00:07:12.781  {
00:07:12.781  "method": "bdev_set_options",
00:07:12.781  "params": {
00:07:12.781  "bdev_io_pool_size": 65535,
00:07:12.781  "bdev_io_cache_size": 256,
00:07:12.781  "bdev_auto_examine": true,
00:07:12.781  "iobuf_small_cache_size": 128,
00:07:12.781  "iobuf_large_cache_size": 16
00:07:12.781  }
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "method": "bdev_raid_set_options",
00:07:12.781  "params": {
00:07:12.781  "process_window_size_kb": 1024,
00:07:12.781  "process_max_bandwidth_mb_sec": 0
00:07:12.781  }
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "method": "bdev_iscsi_set_options",
00:07:12.781  "params": {
00:07:12.781  "timeout_sec": 30
00:07:12.781  }
00:07:12.781  },
00:07:12.781  {
00:07:12.781  "method": "bdev_nvme_set_options",
00:07:12.781  "params": {
00:07:12.781  "action_on_timeout": "none",
00:07:12.781  "timeout_us": 0,
00:07:12.781  "timeout_admin_us": 0,
00:07:12.781  "keep_alive_timeout_ms": 10000,
00:07:12.781  "arbitration_burst": 0,
00:07:12.781  "low_priority_weight": 0,
00:07:12.781  "medium_priority_weight": 0,
00:07:12.781  "high_priority_weight": 0,
00:07:12.782  "nvme_adminq_poll_period_us": 10000,
00:07:12.782  "nvme_ioq_poll_period_us": 0,
00:07:12.782  "io_queue_requests": 0,
00:07:12.782  "delay_cmd_submit": true,
00:07:12.782  "transport_retry_count": 4,
00:07:12.782  "bdev_retry_count": 3,
00:07:12.782  "transport_ack_timeout": 0,
00:07:12.782  "ctrlr_loss_timeout_sec": 0,
00:07:12.782  "reconnect_delay_sec": 0,
00:07:12.782  "fast_io_fail_timeout_sec": 0,
00:07:12.782  "disable_auto_failback": false,
00:07:12.782  "generate_uuids": false,
00:07:12.782  "transport_tos": 0,
00:07:12.782  "nvme_error_stat": false,
00:07:12.782  "rdma_srq_size": 0,
00:07:12.782  "io_path_stat": false,
00:07:12.782  "allow_accel_sequence": false,
00:07:12.782  "rdma_max_cq_size": 0,
00:07:12.782  "rdma_cm_event_timeout_ms": 0,
00:07:12.782  "dhchap_digests": [
00:07:12.782  "sha256",
00:07:12.782  "sha384",
00:07:12.782  "sha512"
00:07:12.782  ],
00:07:12.782  "dhchap_dhgroups": [
00:07:12.782  "null",
00:07:12.782  "ffdhe2048",
00:07:12.782  "ffdhe3072",
00:07:12.782  "ffdhe4096",
00:07:12.782  "ffdhe6144",
00:07:12.782  "ffdhe8192"
00:07:12.782  ]
00:07:12.782  }
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "method": "bdev_nvme_set_hotplug",
00:07:12.782  "params": {
00:07:12.782  "period_us": 100000,
00:07:12.782  "enable": false
00:07:12.782  }
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "method": "bdev_wait_for_examine"
00:07:12.782  }
00:07:12.782  ]
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "scsi",
00:07:12.782  "config": null
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "scheduler",
00:07:12.782  "config": [
00:07:12.782  {
00:07:12.782  "method": "framework_set_scheduler",
00:07:12.782  "params": {
00:07:12.782  "name": "static"
00:07:12.782  }
00:07:12.782  }
00:07:12.782  ]
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "vhost_scsi",
00:07:12.782  "config": []
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "vhost_blk",
00:07:12.782  "config": []
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "ublk",
00:07:12.782  "config": []
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "nbd",
00:07:12.782  "config": []
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "nvmf",
00:07:12.782  "config": [
00:07:12.782  {
00:07:12.782  "method": "nvmf_set_config",
00:07:12.782  "params": {
00:07:12.782  "discovery_filter": "match_any",
00:07:12.782  "admin_cmd_passthru": {
00:07:12.782  "identify_ctrlr": false
00:07:12.782  },
00:07:12.782  "dhchap_digests": [
00:07:12.782  "sha256",
00:07:12.782  "sha384",
00:07:12.782  "sha512"
00:07:12.782  ],
00:07:12.782  "dhchap_dhgroups": [
00:07:12.782  "null",
00:07:12.782  "ffdhe2048",
00:07:12.782  "ffdhe3072",
00:07:12.782  "ffdhe4096",
00:07:12.782  "ffdhe6144",
00:07:12.782  "ffdhe8192"
00:07:12.782  ]
00:07:12.782  }
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "method": "nvmf_set_max_subsystems",
00:07:12.782  "params": {
00:07:12.782  "max_subsystems": 1024
00:07:12.782  }
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "method": "nvmf_set_crdt",
00:07:12.782  "params": {
00:07:12.782  "crdt1": 0,
00:07:12.782  "crdt2": 0,
00:07:12.782  "crdt3": 0
00:07:12.782  }
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "method": "nvmf_create_transport",
00:07:12.782  "params": {
00:07:12.782  "trtype": "TCP",
00:07:12.782  "max_queue_depth": 128,
00:07:12.782  "max_io_qpairs_per_ctrlr": 127,
00:07:12.782  "in_capsule_data_size": 4096,
00:07:12.782  "max_io_size": 131072,
00:07:12.782  "io_unit_size": 131072,
00:07:12.782  "max_aq_depth": 128,
00:07:12.782  "num_shared_buffers": 511,
00:07:12.782  "buf_cache_size": 4294967295,
00:07:12.782  "dif_insert_or_strip": false,
00:07:12.782  "zcopy": false,
00:07:12.782  "c2h_success": true,
00:07:12.782  "sock_priority": 0,
00:07:12.782  "abort_timeout_sec": 1,
00:07:12.782  "ack_timeout": 0,
00:07:12.782  "data_wr_pool_size": 0
00:07:12.782  }
00:07:12.782  }
00:07:12.782  ]
00:07:12.782  },
00:07:12.782  {
00:07:12.782  "subsystem": "iscsi",
00:07:12.782  "config": [
00:07:12.782  {
00:07:12.782  "method": "iscsi_set_options",
00:07:12.782  "params": {
00:07:12.782  "node_base": "iqn.2016-06.io.spdk",
00:07:12.782  "max_sessions": 128,
00:07:12.782  "max_connections_per_session": 2,
00:07:12.782  "max_queue_depth": 64,
00:07:12.782  "default_time2wait": 2,
00:07:12.782  "default_time2retain": 20,
00:07:12.782  "first_burst_length": 8192,
00:07:12.782  "immediate_data": true,
00:07:12.782  "allow_duplicated_isid": false,
00:07:12.782  "error_recovery_level": 0,
00:07:12.782  "nop_timeout": 60,
00:07:12.782  "nop_in_interval": 30,
00:07:12.782  "disable_chap": false,
00:07:12.782  "require_chap": false,
00:07:12.782  "mutual_chap": false,
00:07:12.782  "chap_group": 0,
00:07:12.782  "max_large_datain_per_connection": 64,
00:07:12.782  "max_r2t_per_connection": 4,
00:07:12.782  "pdu_pool_size": 36864,
00:07:12.782  "immediate_data_pool_size": 16384,
00:07:12.782  "data_out_pool_size": 2048
00:07:12.782  }
00:07:12.782  }
00:07:12.782  ]
00:07:12.782  }
00:07:12.782  ]
00:07:12.782  }
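The `config.json` dumped above is the `save_config` output that a later run boots from via `--json`. One way to spot-check such a file is to list its subsystem names; this sketch uses `grep`/`cut` on a hand-written stub config standing in for the real file (rpc.sh itself would use `jq` for this kind of query):

```shell
# Spot-check sketch for a saved SPDK config: list subsystem names.
# The JSON below is a stub, not the real config.json dumped above.
config='{"subsystems":[{"subsystem":"bdev","config":[]},{"subsystem":"nvmf","config":[]}]}'

# Pull out each "subsystem": "<name>" pair and keep only the name.
subs=$(echo "$config" | grep -o '"subsystem": *"[^"]*"' | cut -d'"' -f4)
echo "$subs"
```

Against the real file dumped above, the same query would list fsdev, keyring, iobuf, sock, and the rest of the subsystems in order.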
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1838064
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1838064 ']'
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1838064
00:07:12.782    10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:12.782    10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838064
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838064'
00:07:12.782  killing process with pid 1838064
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1838064
00:07:12.782   10:35:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1838064
00:07:15.317   10:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1838501
00:07:15.317   10:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:07:15.317   10:35:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/config.json
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1838501
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1838501 ']'
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1838501
00:07:20.585    10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:20.585    10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838501
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838501'
00:07:20.585  killing process with pid 1838501
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1838501
00:07:20.585   10:35:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1838501
00:07:22.483   10:35:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/log.txt
00:07:22.483   10:35:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/log.txt
00:07:22.483  
00:07:22.483  real	0m10.916s
00:07:22.483  user	0m10.388s
00:07:22.483  sys	0m0.994s
00:07:22.483   10:35:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.483   10:35:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:22.483  ************************************
00:07:22.483  END TEST skip_rpc_with_json
00:07:22.483  ************************************
00:07:22.483   10:35:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:07:22.483   10:35:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.483   10:35:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.483   10:35:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:22.483  ************************************
00:07:22.483  START TEST skip_rpc_with_delay
00:07:22.483  ************************************
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:22.484    10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:22.484    10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:22.484  [2024-11-19 10:35:12.217319] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:22.484  
00:07:22.484  real	0m0.156s
00:07:22.484  user	0m0.071s
00:07:22.484  sys	0m0.084s
00:07:22.484   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.741   10:35:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:22.741  ************************************
00:07:22.741  END TEST skip_rpc_with_delay
00:07:22.741  ************************************
00:07:22.741    10:35:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:22.741   10:35:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:22.741   10:35:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:22.741   10:35:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.741   10:35:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.741   10:35:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:22.741  ************************************
00:07:22.741  START TEST exit_on_failed_rpc_init
00:07:22.741  ************************************
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1839591
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1839591
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1839591 ']'
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:22.741  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:22.741   10:35:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:22.741  [2024-11-19 10:35:12.450066] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:22.741  [2024-11-19 10:35:12.450172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839591 ]
00:07:22.998  [2024-11-19 10:35:12.586483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.998  [2024-11-19 10:35:12.691151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:23.934    10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:23.934    10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:23.934   10:35:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:23.934  [2024-11-19 10:35:13.553150] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:23.934  [2024-11-19 10:35:13.553250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839778 ]
00:07:23.934  [2024-11-19 10:35:13.687254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.193  [2024-11-19 10:35:13.796934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:24.193  [2024-11-19 10:35:13.797028] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:07:24.193  [2024-11-19 10:35:13.797051] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:24.193  [2024-11-19 10:35:13.797062] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1839591
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1839591 ']'
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1839591
00:07:24.452    10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:24.452    10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1839591
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1839591'
00:07:24.452  killing process with pid 1839591
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1839591
00:07:24.452   10:35:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1839591
00:07:26.984  
00:07:26.984  real	0m3.985s
00:07:26.984  user	0m4.239s
00:07:26.984  sys	0m0.736s
00:07:26.984   10:35:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.984   10:35:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:26.984  ************************************
00:07:26.984  END TEST exit_on_failed_rpc_init
00:07:26.984  ************************************
00:07:26.984   10:35:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc/config.json
00:07:26.984  
00:07:26.984  real	0m22.866s
00:07:26.984  user	0m21.805s
00:07:26.984  sys	0m2.585s
00:07:26.984   10:35:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.984   10:35:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:26.984  ************************************
00:07:26.984  END TEST skip_rpc
00:07:26.984  ************************************
00:07:26.984   10:35:16  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:26.984   10:35:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.984   10:35:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.984   10:35:16  -- common/autotest_common.sh@10 -- # set +x
00:07:26.984  ************************************
00:07:26.984  START TEST rpc_client
00:07:26.984  ************************************
00:07:26.984   10:35:16 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:26.984  * Looking for test storage...
00:07:26.984  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_client
00:07:26.984    10:35:16 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:26.984     10:35:16 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:07:26.984     10:35:16 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:26.984    10:35:16 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@345 -- # : 1
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@353 -- # local d=1
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@355 -- # echo 1
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@353 -- # local d=2
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:26.984     10:35:16 rpc_client -- scripts/common.sh@355 -- # echo 2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:26.984    10:35:16 rpc_client -- scripts/common.sh@368 -- # return 0
00:07:26.984    10:35:16 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:26.984    10:35:16 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:26.984  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.984  		--rc genhtml_branch_coverage=1
00:07:26.984  		--rc genhtml_function_coverage=1
00:07:26.984  		--rc genhtml_legend=1
00:07:26.984  		--rc geninfo_all_blocks=1
00:07:26.984  		--rc geninfo_unexecuted_blocks=1
00:07:26.984  		
00:07:26.984  		'
00:07:26.984    10:35:16 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:26.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.985  		--rc genhtml_branch_coverage=1
00:07:26.985  		--rc genhtml_function_coverage=1
00:07:26.985  		--rc genhtml_legend=1
00:07:26.985  		--rc geninfo_all_blocks=1
00:07:26.985  		--rc geninfo_unexecuted_blocks=1
00:07:26.985  		
00:07:26.985  		'
00:07:26.985    10:35:16 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:26.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.985  		--rc genhtml_branch_coverage=1
00:07:26.985  		--rc genhtml_function_coverage=1
00:07:26.985  		--rc genhtml_legend=1
00:07:26.985  		--rc geninfo_all_blocks=1
00:07:26.985  		--rc geninfo_unexecuted_blocks=1
00:07:26.985  		
00:07:26.985  		'
00:07:26.985    10:35:16 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:26.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.985  		--rc genhtml_branch_coverage=1
00:07:26.985  		--rc genhtml_function_coverage=1
00:07:26.985  		--rc genhtml_legend=1
00:07:26.985  		--rc geninfo_all_blocks=1
00:07:26.985  		--rc geninfo_unexecuted_blocks=1
00:07:26.985  		
00:07:26.985  		'
00:07:26.985   10:35:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:26.985  OK
00:07:26.985   10:35:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:26.985  
00:07:26.985  real	0m0.244s
00:07:26.985  user	0m0.124s
00:07:26.985  sys	0m0.136s
00:07:26.985   10:35:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.985   10:35:16 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:26.985  ************************************
00:07:26.985  END TEST rpc_client
00:07:26.985  ************************************
00:07:26.985   10:35:16  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_config.sh
00:07:26.985   10:35:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.985   10:35:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.985   10:35:16  -- common/autotest_common.sh@10 -- # set +x
00:07:27.244  ************************************
00:07:27.244  START TEST json_config
00:07:27.244  ************************************
00:07:27.244   10:35:16 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_config.sh
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:27.244     10:35:16 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:07:27.244     10:35:16 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:27.244    10:35:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:27.244    10:35:16 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:07:27.244    10:35:16 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:07:27.244    10:35:16 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:07:27.244    10:35:16 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:07:27.244    10:35:16 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:07:27.244    10:35:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:27.244    10:35:16 json_config -- scripts/common.sh@344 -- # case "$op" in
00:07:27.244    10:35:16 json_config -- scripts/common.sh@345 -- # : 1
00:07:27.244    10:35:16 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:27.244    10:35:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:27.244     10:35:16 json_config -- scripts/common.sh@365 -- # decimal 1
00:07:27.244     10:35:16 json_config -- scripts/common.sh@353 -- # local d=1
00:07:27.244     10:35:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:27.244     10:35:16 json_config -- scripts/common.sh@355 -- # echo 1
00:07:27.244    10:35:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:07:27.244     10:35:16 json_config -- scripts/common.sh@366 -- # decimal 2
00:07:27.244     10:35:16 json_config -- scripts/common.sh@353 -- # local d=2
00:07:27.244     10:35:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:27.244     10:35:16 json_config -- scripts/common.sh@355 -- # echo 2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:07:27.244    10:35:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:27.244    10:35:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:27.244    10:35:16 json_config -- scripts/common.sh@368 -- # return 0
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:27.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.244  		--rc genhtml_branch_coverage=1
00:07:27.244  		--rc genhtml_function_coverage=1
00:07:27.244  		--rc genhtml_legend=1
00:07:27.244  		--rc geninfo_all_blocks=1
00:07:27.244  		--rc geninfo_unexecuted_blocks=1
00:07:27.244  		
00:07:27.244  		'
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:27.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.244  		--rc genhtml_branch_coverage=1
00:07:27.244  		--rc genhtml_function_coverage=1
00:07:27.244  		--rc genhtml_legend=1
00:07:27.244  		--rc geninfo_all_blocks=1
00:07:27.244  		--rc geninfo_unexecuted_blocks=1
00:07:27.244  		
00:07:27.244  		'
00:07:27.244    10:35:16 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:27.244  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.244  		--rc genhtml_branch_coverage=1
00:07:27.244  		--rc genhtml_function_coverage=1
00:07:27.245  		--rc genhtml_legend=1
00:07:27.245  		--rc geninfo_all_blocks=1
00:07:27.245  		--rc geninfo_unexecuted_blocks=1
00:07:27.245  		
00:07:27.245  		'
00:07:27.245    10:35:16 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:27.245  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:27.245  		--rc genhtml_branch_coverage=1
00:07:27.245  		--rc genhtml_function_coverage=1
00:07:27.245  		--rc genhtml_legend=1
00:07:27.245  		--rc geninfo_all_blocks=1
00:07:27.245  		--rc geninfo_unexecuted_blocks=1
00:07:27.245  		
00:07:27.245  		'
00:07:27.245   10:35:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh
00:07:27.245     10:35:16 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:27.245     10:35:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:07:27.245     10:35:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:07:27.245     10:35:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:27.245     10:35:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:27.245     10:35:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:27.245      10:35:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.245      10:35:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.245      10:35:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.245      10:35:16 json_config -- paths/export.sh@5 -- # export PATH
00:07:27.245      10:35:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@51 -- # : 0
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:27.245  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:27.245    10:35:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:27.245   10:35:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/common.sh
00:07:27.245   10:35:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:27.245   10:35:16 json_config -- json_config/json_config.sh@15 -- # [[ 1 -ne 1 ]]
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_initiator_config.json')
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:07:27.246  INFO: JSON configuration test init
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:07:27.246   10:35:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:27.246   10:35:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:27.246   10:35:16 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:07:27.246   10:35:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:27.246   10:35:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:27.246   10:35:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:07:27.246   10:35:17 json_config -- json_config/common.sh@9 -- # local app=target
00:07:27.246   10:35:17 json_config -- json_config/common.sh@10 -- # shift
00:07:27.246   10:35:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:27.246   10:35:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:27.246   10:35:17 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:27.246   10:35:17 json_config -- json_config/common.sh@16 -- # [[ 1 -eq 1 ]]
00:07:27.246   10:35:17 json_config -- json_config/common.sh@18 -- # app_extra_params='-S /var/tmp'
00:07:27.246   10:35:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1840391
00:07:27.246   10:35:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:27.246  Waiting for target to run...
00:07:27.246   10:35:17 json_config -- json_config/common.sh@25 -- # waitforlisten 1840391 /var/tmp/spdk_tgt.sock
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 1840391 ']'
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:27.246   10:35:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -S /var/tmp -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:27.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:27.246   10:35:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:27.505  [2024-11-19 10:35:17.103349] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:27.505  [2024-11-19 10:35:17.103448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840391 ]
00:07:28.074  [2024-11-19 10:35:17.705853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.074  [2024-11-19 10:35:17.804206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.332   10:35:17 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:28.332   10:35:17 json_config -- common/autotest_common.sh@868 -- # return 0
00:07:28.332   10:35:17 json_config -- json_config/common.sh@26 -- # echo ''
00:07:28.332  
00:07:28.332   10:35:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:07:28.332   10:35:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:07:28.332   10:35:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:28.333   10:35:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:28.333   10:35:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:07:28.333   10:35:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:07:28.333   10:35:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:28.333   10:35:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:28.333   10:35:17 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:07:28.333   10:35:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:07:28.333   10:35:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:07:30.233   10:35:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:30.233   10:35:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:07:30.233    10:35:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@51 -- # local get_types
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@54 -- # sort
00:07:30.233    10:35:19 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:07:30.233   10:35:19 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:07:30.233   10:35:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:30.233   10:35:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:30.233   10:35:20 json_config -- json_config/json_config.sh@62 -- # return 0
00:07:30.233   10:35:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:07:30.233   10:35:20 json_config -- json_config/json_config.sh@289 -- # [[ 1 -eq 1 ]]
00:07:30.234   10:35:20 json_config -- json_config/json_config.sh@290 -- # create_vhost_subsystem_config
00:07:30.234   10:35:20 json_config -- json_config/json_config.sh@212 -- # timing_enter create_vhost_subsystem_config
00:07:30.234   10:35:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:30.234   10:35:20 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:30.234   10:35:20 json_config -- json_config/json_config.sh@214 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForVhost0
00:07:30.234   10:35:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForVhost0
00:07:30.491  MallocForVhost0
00:07:30.491   10:35:20 json_config -- json_config/json_config.sh@215 -- # tgt_rpc bdev_split_create MallocForVhost0 8
00:07:30.491   10:35:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create MallocForVhost0 8
00:07:30.749  MallocForVhost0p0 MallocForVhost0p1 MallocForVhost0p2 MallocForVhost0p3 MallocForVhost0p4 MallocForVhost0p5 MallocForVhost0p6 MallocForVhost0p7
00:07:30.749   10:35:20 json_config -- json_config/json_config.sh@217 -- # tgt_rpc vhost_create_scsi_controller VhostScsiCtrlr0
00:07:30.749   10:35:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock vhost_create_scsi_controller VhostScsiCtrlr0
00:07:31.008  VHOST_CONFIG: (/var/tmp/VhostScsiCtrlr0) vhost-user server: socket created, fd: 550
00:07:31.008  VHOST_CONFIG: (/var/tmp/VhostScsiCtrlr0) binding succeeded
00:07:31.008   10:35:20 json_config -- json_config/json_config.sh@218 -- # tgt_rpc vhost_scsi_controller_add_target VhostScsiCtrlr0 0 MallocForVhost0p3
00:07:31.008   10:35:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock vhost_scsi_controller_add_target VhostScsiCtrlr0 0 MallocForVhost0p3
00:07:31.266  0
00:07:31.266   10:35:20 json_config -- json_config/json_config.sh@219 -- # tgt_rpc vhost_scsi_controller_add_target VhostScsiCtrlr0 -1 MallocForVhost0p4
00:07:31.266   10:35:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock vhost_scsi_controller_add_target VhostScsiCtrlr0 -1 MallocForVhost0p4
00:07:31.266  1
00:07:31.266   10:35:21 json_config -- json_config/json_config.sh@220 -- # tgt_rpc vhost_controller_set_coalescing VhostScsiCtrlr0 1 100
00:07:31.266   10:35:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock vhost_controller_set_coalescing VhostScsiCtrlr0 1 100
00:07:31.524   10:35:21 json_config -- json_config/json_config.sh@222 -- # tgt_rpc vhost_create_blk_controller VhostBlkCtrlr0 MallocForVhost0p5
00:07:31.525   10:35:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock vhost_create_blk_controller VhostBlkCtrlr0 MallocForVhost0p5
00:07:31.783  VHOST_CONFIG: (/var/tmp/VhostBlkCtrlr0) vhost-user server: socket created, fd: 553
00:07:31.783  VHOST_CONFIG: (/var/tmp/VhostBlkCtrlr0) binding succeeded
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@224 -- # timing_exit create_vhost_subsystem_config
00:07:31.783   10:35:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:31.783   10:35:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]]
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:07:31.783   10:35:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:31.783   10:35:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:07:31.783   10:35:21 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:31.783   10:35:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:32.041  MallocBdevForConfigChangeCheck
00:07:32.041   10:35:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:07:32.041   10:35:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:32.041   10:35:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:32.041   10:35:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:07:32.041   10:35:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:32.299   10:35:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:07:32.300  INFO: shutting down applications...
00:07:32.300   10:35:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:07:32.300   10:35:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:07:32.300   10:35:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:07:32.300   10:35:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:33.235  Calling clear_iscsi_subsystem
00:07:33.235  Calling clear_nvmf_subsystem
00:07:33.235  Calling clear_nbd_subsystem
00:07:33.235  Calling clear_ublk_subsystem
00:07:33.235  Calling clear_vhost_blk_subsystem
00:07:33.235  Calling clear_vhost_scsi_subsystem
00:07:33.235  Calling clear_bdev_subsystem
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@350 -- # count=100
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:33.235   10:35:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:33.494   10:35:23 json_config -- json_config/json_config.sh@352 -- # break
00:07:33.494   10:35:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:07:33.494   10:35:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:07:33.494   10:35:23 json_config -- json_config/common.sh@31 -- # local app=target
00:07:33.494   10:35:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:33.494   10:35:23 json_config -- json_config/common.sh@35 -- # [[ -n 1840391 ]]
00:07:33.494   10:35:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1840391
00:07:33.494   10:35:23 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:33.494   10:35:23 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:33.494   10:35:23 json_config -- json_config/common.sh@41 -- # kill -0 1840391
00:07:33.494   10:35:23 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:34.061   10:35:23 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:34.061   10:35:23 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:34.061   10:35:23 json_config -- json_config/common.sh@41 -- # kill -0 1840391
00:07:34.061   10:35:23 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:34.630   10:35:24 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:34.630   10:35:24 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:34.630   10:35:24 json_config -- json_config/common.sh@41 -- # kill -0 1840391
00:07:34.630   10:35:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:34.630   10:35:24 json_config -- json_config/common.sh@43 -- # break
00:07:34.630   10:35:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:34.630   10:35:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:34.630  SPDK target shutdown done
00:07:34.630   10:35:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:07:34.630  INFO: relaunching applications...
00:07:34.630   10:35:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.630   10:35:24 json_config -- json_config/common.sh@9 -- # local app=target
00:07:34.630   10:35:24 json_config -- json_config/common.sh@10 -- # shift
00:07:34.630   10:35:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:34.630   10:35:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:34.630   10:35:24 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:34.630   10:35:24 json_config -- json_config/common.sh@16 -- # [[ 1 -eq 1 ]]
00:07:34.630   10:35:24 json_config -- json_config/common.sh@18 -- # app_extra_params='-S /var/tmp'
00:07:34.630   10:35:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1841502
00:07:34.630   10:35:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:34.630  Waiting for target to run...
00:07:34.630   10:35:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -S /var/tmp -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.630   10:35:24 json_config -- json_config/common.sh@25 -- # waitforlisten 1841502 /var/tmp/spdk_tgt.sock
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 1841502 ']'
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:34.630  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:34.630   10:35:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:34.630  [2024-11-19 10:35:24.295060] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:34.630  [2024-11-19 10:35:24.295168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841502 ]
00:07:35.198  [2024-11-19 10:35:24.903285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.456  [2024-11-19 10:35:25.005198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.024  [2024-11-19 10:35:25.696959] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForVhost0
00:07:36.024  [2024-11-19 10:35:25.697019] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForVhost0
00:07:37.401  VHOST_CONFIG: (/var/tmp/VhostScsiCtrlr0) vhost-user server: socket created, fd: 553
00:07:37.401  VHOST_CONFIG: (/var/tmp/VhostScsiCtrlr0) binding succeeded
00:07:37.401  VHOST_CONFIG: (/var/tmp/VhostBlkCtrlr0) vhost-user server: socket created, fd: 556
00:07:37.401  VHOST_CONFIG: (/var/tmp/VhostBlkCtrlr0) binding succeeded
00:07:37.966   10:35:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.966   10:35:27 json_config -- common/autotest_common.sh@868 -- # return 0
00:07:37.966   10:35:27 json_config -- json_config/common.sh@26 -- # echo ''
00:07:37.966  
00:07:37.966   10:35:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:07:37.966   10:35:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:37.967  INFO: Checking if target configuration is the same...
00:07:37.967   10:35:27 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:37.967    10:35:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:07:37.967    10:35:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:37.967  + '[' 2 -ne 2 ']'
00:07:37.967  +++ dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:37.967  ++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/../..
00:07:37.967  + rootdir=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:07:37.967  +++ basename /dev/fd/62
00:07:37.967  ++ mktemp /tmp/62.XXX
00:07:37.967  + tmp_file_1=/tmp/62.S5b
00:07:37.967  +++ basename /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:37.967  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:37.967  + tmp_file_2=/tmp/spdk_tgt_config.json.k8a
00:07:37.967  + ret=0
00:07:37.967  + /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:38.225  + /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:38.225  + diff -u /tmp/62.S5b /tmp/spdk_tgt_config.json.k8a
00:07:38.225  + echo 'INFO: JSON config files are the same'
00:07:38.225  INFO: JSON config files are the same
00:07:38.225  + rm /tmp/62.S5b /tmp/spdk_tgt_config.json.k8a
00:07:38.225  + exit 0
00:07:38.225   10:35:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:07:38.225   10:35:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:38.225  INFO: changing configuration and checking if this can be detected...
00:07:38.225   10:35:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:38.225   10:35:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:38.484    10:35:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:07:38.484    10:35:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:38.484   10:35:28 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:38.484  + '[' 2 -ne 2 ']'
00:07:38.484  +++ dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:38.484  ++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/../..
00:07:38.484  + rootdir=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:07:38.484  +++ basename /dev/fd/62
00:07:38.484  ++ mktemp /tmp/62.XXX
00:07:38.484  + tmp_file_1=/tmp/62.7Ij
00:07:38.484  +++ basename /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:38.484  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:38.484  + tmp_file_2=/tmp/spdk_tgt_config.json.u3f
00:07:38.484  + ret=0
00:07:38.484  + /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:38.743  + /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:38.743  + diff -u /tmp/62.7Ij /tmp/spdk_tgt_config.json.u3f
00:07:38.743  + ret=1
00:07:38.743  + echo '=== Start of file: /tmp/62.7Ij ==='
00:07:38.743  + cat /tmp/62.7Ij
00:07:38.743  + echo '=== End of file: /tmp/62.7Ij ==='
00:07:38.743  + echo ''
00:07:38.743  + echo '=== Start of file: /tmp/spdk_tgt_config.json.u3f ==='
00:07:38.743  + cat /tmp/spdk_tgt_config.json.u3f
00:07:38.743  + echo '=== End of file: /tmp/spdk_tgt_config.json.u3f ==='
00:07:38.743  + echo ''
00:07:38.743  + rm /tmp/62.7Ij /tmp/spdk_tgt_config.json.u3f
00:07:38.743  + exit 1
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:07:38.743  INFO: configuration change detected.
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 1841502 ]]
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:07:38.743    10:35:28 json_config -- json_config/json_config.sh@200 -- # uname -s
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:07:38.743   10:35:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:38.743   10:35:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:39.002   10:35:28 json_config -- json_config/json_config.sh@330 -- # killprocess 1841502
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 1841502 ']'
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@958 -- # kill -0 1841502
00:07:39.002    10:35:28 json_config -- common/autotest_common.sh@959 -- # uname
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.002    10:35:28 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1841502
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1841502'
00:07:39.002  killing process with pid 1841502
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@973 -- # kill 1841502
00:07:39.002   10:35:28 json_config -- common/autotest_common.sh@978 -- # wait 1841502
00:07:40.376   10:35:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/vhost-phy-autotest/spdk/spdk_tgt_config.json
00:07:40.376   10:35:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:07:40.376   10:35:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:40.376   10:35:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:40.376   10:35:30 json_config -- json_config/json_config.sh@335 -- # return 0
00:07:40.376   10:35:30 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:07:40.376  INFO: Success
00:07:40.376  
00:07:40.376  real	0m13.221s
00:07:40.376  user	0m13.325s
00:07:40.376  sys	0m3.581s
00:07:40.376   10:35:30 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.376   10:35:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:40.377  ************************************
00:07:40.377  END TEST json_config
00:07:40.377  ************************************
00:07:40.377   10:35:30  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:40.377   10:35:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.377   10:35:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.377   10:35:30  -- common/autotest_common.sh@10 -- # set +x
00:07:40.377  ************************************
00:07:40.377  START TEST json_config_extra_key
00:07:40.377  ************************************
00:07:40.377   10:35:30 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:40.377    10:35:30 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:40.377     10:35:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:07:40.377     10:35:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:40.639     10:35:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:40.639    10:35:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:40.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.639  		--rc genhtml_branch_coverage=1
00:07:40.639  		--rc genhtml_function_coverage=1
00:07:40.639  		--rc genhtml_legend=1
00:07:40.639  		--rc geninfo_all_blocks=1
00:07:40.639  		--rc geninfo_unexecuted_blocks=1
00:07:40.639  		
00:07:40.639  		'
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:40.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.639  		--rc genhtml_branch_coverage=1
00:07:40.639  		--rc genhtml_function_coverage=1
00:07:40.639  		--rc genhtml_legend=1
00:07:40.639  		--rc geninfo_all_blocks=1
00:07:40.639  		--rc geninfo_unexecuted_blocks=1
00:07:40.639  		
00:07:40.639  		'
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:40.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.639  		--rc genhtml_branch_coverage=1
00:07:40.639  		--rc genhtml_function_coverage=1
00:07:40.639  		--rc genhtml_legend=1
00:07:40.639  		--rc geninfo_all_blocks=1
00:07:40.639  		--rc geninfo_unexecuted_blocks=1
00:07:40.639  		
00:07:40.639  		'
00:07:40.639    10:35:30 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:40.639  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:40.639  		--rc genhtml_branch_coverage=1
00:07:40.639  		--rc genhtml_function_coverage=1
00:07:40.639  		--rc genhtml_legend=1
00:07:40.639  		--rc geninfo_all_blocks=1
00:07:40.639  		--rc geninfo_unexecuted_blocks=1
00:07:40.639  		
00:07:40.639  		'
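The trace above (scripts/common.sh@333–368) performs a component-wise version comparison: both version strings are split on `.`, `-`, and `:` into arrays, then compared field by field. A minimal standalone sketch of that logic, with an illustrative function name (`lt_version` is not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise "less than" version test seen in the trace.
# Assumes purely numeric components; non-numeric fields would need extra
# handling (the real cmp_versions in scripts/common.sh covers more cases).
lt_version() {
	local -a v1 v2
	local i n a b
	IFS=.-: read -ra v1 <<< "$1"   # same separators the trace uses
	IFS=.-: read -ra v2 <<< "$2"
	n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
	for (( i = 0; i < n; i++ )); do
		# Missing components compare as 0, so 1.15 vs 2 acts like 1.15.0 vs 2.0.0
		a=${v1[i]:-0} b=${v2[i]:-0}
		(( a < b )) && return 0
		(( a > b )) && return 1
	done
	return 1  # equal is not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"
```

This is why the `lt 1.15 2` call in the trace returns 0: the first components already decide the comparison (1 < 2), so `lcov --version` (1.x) is treated as older than 2.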
00:07:40.639   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh
00:07:40.639     10:35:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:40.639    10:35:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:40.640     10:35:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:07:40.640     10:35:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:07:40.640     10:35:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:40.640     10:35:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:40.640     10:35:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:40.640      10:35:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:40.640      10:35:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:40.640      10:35:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:40.640      10:35:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:40.640      10:35:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:40.640  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:40.640    10:35:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
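The warning at nvmf/common.sh line 33 above (`[: : integer expression expected`) is a real, if harmless, shell bug: `'[' '' -eq 1 ']'` applies a numeric test to an empty string. A defensive pattern (illustrative only, not the actual common.sh fix) is to default the variable before the numeric comparison:

```shell
#!/usr/bin/env bash
# Reproduces and guards against the "[: : integer expression expected"
# warning seen above. "maybe_flag" is a hypothetical stand-in for the
# empty variable tested in nvmf/common.sh.
maybe_flag=""   # unset/empty in this environment, as in the trace

# Guarded form: ${var:-0} substitutes 0 when the variable is empty,
# so the -eq test always receives a valid integer.
if [ "${maybe_flag:-0}" -eq 1 ]; then
	echo "flag set"
else
	echo "flag unset"
fi
```

The unguarded form `[ "$maybe_flag" -eq 1 ]` still evaluates false after printing the warning, which is why the test run continues unaffected.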
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/common.sh
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:07:40.640  INFO: launching applications...
00:07:40.640   10:35:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/extra_key.json
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 1 -eq 1 ]]
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@18 -- # app_extra_params='-S /var/tmp'
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1842370
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:40.640  Waiting for target to run...
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1842370 /var/tmp/spdk_tgt.sock
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1842370 ']'
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:40.640   10:35:30 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -S /var/tmp -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/extra_key.json
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:40.640  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:40.640   10:35:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:40.640  [2024-11-19 10:35:30.389461] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:40.640  [2024-11-19 10:35:30.389564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842370 ]
00:07:41.237  [2024-11-19 10:35:30.904108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.237  [2024-11-19 10:35:31.012733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.172   10:35:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.172   10:35:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:42.172  
00:07:42.172   10:35:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:07:42.172  INFO: shutting down applications...
00:07:42.172   10:35:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1842370 ]]
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1842370
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:42.172   10:35:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:42.431   10:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:42.431   10:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:42.431   10:35:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:42.431   10:35:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:42.998   10:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:42.998   10:35:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:42.998   10:35:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:42.998   10:35:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:43.564   10:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:43.564   10:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:43.564   10:35:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:43.564   10:35:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:44.131   10:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:44.131   10:35:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:44.131   10:35:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:44.131   10:35:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:44.390   10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:44.390   10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:44.390   10:35:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:44.390   10:35:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1842370
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:44.958   10:35:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:44.958  SPDK target shutdown done
00:07:44.958   10:35:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:07:44.958  Success
00:07:44.958  
00:07:44.958  real	0m4.551s
00:07:44.958  user	0m3.709s
00:07:44.958  sys	0m0.811s
00:07:44.958   10:35:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.958   10:35:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:44.958  ************************************
00:07:44.958  END TEST json_config_extra_key
00:07:44.958  ************************************
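The shutdown sequence traced in json_config/common.sh@38–45 above sends SIGINT to the target, then polls with `kill -0` (which checks process existence without delivering a signal) every 0.5 s, up to 30 attempts, before declaring "SPDK target shutdown done". A self-contained sketch of that loop, under an illustrative name (`wait_for_exit` is not SPDK's helper):

```shell
#!/usr/bin/env bash
# Sketch of the bounded shutdown-poll loop from the trace: signal the
# process, then poll for its exit with a retry budget.
wait_for_exit() {
	local pid=$1 retries=${2:-30} i
	kill -SIGINT "$pid" 2>/dev/null   # request graceful shutdown
	for (( i = 0; i < retries; i++ )); do
		# kill -0 sends no signal; it only tests that the PID exists
		kill -0 "$pid" 2>/dev/null || return 0
		sleep 0.5
	done
	return 1  # still alive after the budget; caller may escalate
}

sleep 1 &                      # throwaway child standing in for spdk_tgt
wait_for_exit $! && echo "shutdown done"
```

The trace shows seven poll iterations (~3.5 s) before the target's PID disappeared, well inside the 30-attempt budget.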
00:07:44.958   10:35:34  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:44.958   10:35:34  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:44.958   10:35:34  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.958   10:35:34  -- common/autotest_common.sh@10 -- # set +x
00:07:44.958  ************************************
00:07:44.958  START TEST alias_rpc
00:07:44.958  ************************************
00:07:44.958   10:35:34 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:45.218  * Looking for test storage...
00:07:45.218  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/alias_rpc
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:45.218     10:35:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:07:45.218     10:35:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@345 -- # : 1
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:45.218     10:35:34 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:45.218    10:35:34 alias_rpc -- scripts/common.sh@368 -- # return 0
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:45.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:45.218  		--rc genhtml_branch_coverage=1
00:07:45.218  		--rc genhtml_function_coverage=1
00:07:45.218  		--rc genhtml_legend=1
00:07:45.218  		--rc geninfo_all_blocks=1
00:07:45.218  		--rc geninfo_unexecuted_blocks=1
00:07:45.218  		
00:07:45.218  		'
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:45.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:45.218  		--rc genhtml_branch_coverage=1
00:07:45.218  		--rc genhtml_function_coverage=1
00:07:45.218  		--rc genhtml_legend=1
00:07:45.218  		--rc geninfo_all_blocks=1
00:07:45.218  		--rc geninfo_unexecuted_blocks=1
00:07:45.218  		
00:07:45.218  		'
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:45.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:45.218  		--rc genhtml_branch_coverage=1
00:07:45.218  		--rc genhtml_function_coverage=1
00:07:45.218  		--rc genhtml_legend=1
00:07:45.218  		--rc geninfo_all_blocks=1
00:07:45.218  		--rc geninfo_unexecuted_blocks=1
00:07:45.218  		
00:07:45.218  		'
00:07:45.218    10:35:34 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:45.218  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:45.218  		--rc genhtml_branch_coverage=1
00:07:45.218  		--rc genhtml_function_coverage=1
00:07:45.218  		--rc genhtml_legend=1
00:07:45.218  		--rc geninfo_all_blocks=1
00:07:45.218  		--rc geninfo_unexecuted_blocks=1
00:07:45.218  		
00:07:45.218  		'
00:07:45.218   10:35:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:45.218   10:35:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1843149
00:07:45.218   10:35:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1843149
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1843149 ']'
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:45.218  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:45.218   10:35:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:45.218   10:35:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.218  [2024-11-19 10:35:34.985408] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:45.218  [2024-11-19 10:35:34.985538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843149 ]
00:07:45.477  [2024-11-19 10:35:35.117291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.477  [2024-11-19 10:35:35.215866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.412   10:35:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:46.412   10:35:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:46.412   10:35:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py load_config -i
00:07:46.412   10:35:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1843149
00:07:46.412   10:35:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1843149 ']'
00:07:46.412   10:35:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1843149
00:07:46.412    10:35:36 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:07:46.412   10:35:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:46.412    10:35:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843149
00:07:46.671   10:35:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:46.671   10:35:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:46.671   10:35:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843149'
00:07:46.671  killing process with pid 1843149
00:07:46.671   10:35:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 1843149
00:07:46.671   10:35:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 1843149
00:07:49.202  
00:07:49.202  real	0m3.771s
00:07:49.202  user	0m3.737s
00:07:49.202  sys	0m0.639s
00:07:49.202   10:35:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.202   10:35:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:49.202  ************************************
00:07:49.202  END TEST alias_rpc
00:07:49.202  ************************************
00:07:49.202   10:35:38  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:07:49.202   10:35:38  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vhost-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:49.202   10:35:38  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.202   10:35:38  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.202   10:35:38  -- common/autotest_common.sh@10 -- # set +x
00:07:49.202  ************************************
00:07:49.202  START TEST spdkcli_tcp
00:07:49.202  ************************************
00:07:49.202   10:35:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:49.202  * Looking for test storage...
00:07:49.202  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/spdkcli
00:07:49.202    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:49.202     10:35:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:07:49.202     10:35:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:49.202    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:07:49.202    10:35:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:07:49.202     10:35:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.203     10:35:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:07:49.203    10:35:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.203    10:35:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.203    10:35:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.203    10:35:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:07:49.203    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.203    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:49.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.203  		--rc genhtml_branch_coverage=1
00:07:49.203  		--rc genhtml_function_coverage=1
00:07:49.203  		--rc genhtml_legend=1
00:07:49.203  		--rc geninfo_all_blocks=1
00:07:49.203  		--rc geninfo_unexecuted_blocks=1
00:07:49.203  		
00:07:49.203  		'
00:07:49.203    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:49.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.203  		--rc genhtml_branch_coverage=1
00:07:49.203  		--rc genhtml_function_coverage=1
00:07:49.203  		--rc genhtml_legend=1
00:07:49.203  		--rc geninfo_all_blocks=1
00:07:49.203  		--rc geninfo_unexecuted_blocks=1
00:07:49.203  		
00:07:49.203  		'
00:07:49.203    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:49.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.203  		--rc genhtml_branch_coverage=1
00:07:49.203  		--rc genhtml_function_coverage=1
00:07:49.203  		--rc genhtml_legend=1
00:07:49.203  		--rc geninfo_all_blocks=1
00:07:49.203  		--rc geninfo_unexecuted_blocks=1
00:07:49.203  		
00:07:49.203  		'
00:07:49.203    10:35:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:49.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.203  		--rc genhtml_branch_coverage=1
00:07:49.203  		--rc genhtml_function_coverage=1
00:07:49.203  		--rc genhtml_legend=1
00:07:49.203  		--rc geninfo_all_blocks=1
00:07:49.203  		--rc geninfo_unexecuted_blocks=1
00:07:49.203  		
00:07:49.203  		'
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/spdkcli/common.sh
00:07:49.203    10:35:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:07:49.203    10:35:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/json_config/clear_config.py
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1843699
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1843699
00:07:49.203   10:35:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1843699 ']'
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.203  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:49.203   10:35:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:49.203  [2024-11-19 10:35:38.867574] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:49.203  [2024-11-19 10:35:38.867672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843699 ]
00:07:49.462  [2024-11-19 10:35:39.002326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:49.462  [2024-11-19 10:35:39.107605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.462  [2024-11-19 10:35:39.107616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:50.396   10:35:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.396   10:35:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:07:50.396   10:35:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1843782
00:07:50.396   10:35:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:07:50.396   10:35:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:07:50.396  [
00:07:50.397    "bdev_malloc_delete",
00:07:50.397    "bdev_malloc_create",
00:07:50.397    "bdev_null_resize",
00:07:50.397    "bdev_null_delete",
00:07:50.397    "bdev_null_create",
00:07:50.397    "bdev_nvme_cuse_unregister",
00:07:50.397    "bdev_nvme_cuse_register",
00:07:50.397    "bdev_opal_new_user",
00:07:50.397    "bdev_opal_set_lock_state",
00:07:50.397    "bdev_opal_delete",
00:07:50.397    "bdev_opal_get_info",
00:07:50.397    "bdev_opal_create",
00:07:50.397    "bdev_nvme_opal_revert",
00:07:50.397    "bdev_nvme_opal_init",
00:07:50.397    "bdev_nvme_send_cmd",
00:07:50.397    "bdev_nvme_set_keys",
00:07:50.397    "bdev_nvme_get_path_iostat",
00:07:50.397    "bdev_nvme_get_mdns_discovery_info",
00:07:50.397    "bdev_nvme_stop_mdns_discovery",
00:07:50.397    "bdev_nvme_start_mdns_discovery",
00:07:50.397    "bdev_nvme_set_multipath_policy",
00:07:50.397    "bdev_nvme_set_preferred_path",
00:07:50.397    "bdev_nvme_get_io_paths",
00:07:50.397    "bdev_nvme_remove_error_injection",
00:07:50.397    "bdev_nvme_add_error_injection",
00:07:50.397    "bdev_nvme_get_discovery_info",
00:07:50.397    "bdev_nvme_stop_discovery",
00:07:50.397    "bdev_nvme_start_discovery",
00:07:50.397    "bdev_nvme_get_controller_health_info",
00:07:50.397    "bdev_nvme_disable_controller",
00:07:50.397    "bdev_nvme_enable_controller",
00:07:50.397    "bdev_nvme_reset_controller",
00:07:50.397    "bdev_nvme_get_transport_statistics",
00:07:50.397    "bdev_nvme_apply_firmware",
00:07:50.397    "bdev_nvme_detach_controller",
00:07:50.397    "bdev_nvme_get_controllers",
00:07:50.397    "bdev_nvme_attach_controller",
00:07:50.397    "bdev_nvme_set_hotplug",
00:07:50.397    "bdev_nvme_set_options",
00:07:50.397    "bdev_passthru_delete",
00:07:50.397    "bdev_passthru_create",
00:07:50.397    "bdev_lvol_set_parent_bdev",
00:07:50.397    "bdev_lvol_set_parent",
00:07:50.397    "bdev_lvol_check_shallow_copy",
00:07:50.397    "bdev_lvol_start_shallow_copy",
00:07:50.397    "bdev_lvol_grow_lvstore",
00:07:50.397    "bdev_lvol_get_lvols",
00:07:50.397    "bdev_lvol_get_lvstores",
00:07:50.397    "bdev_lvol_delete",
00:07:50.397    "bdev_lvol_set_read_only",
00:07:50.397    "bdev_lvol_resize",
00:07:50.397    "bdev_lvol_decouple_parent",
00:07:50.397    "bdev_lvol_inflate",
00:07:50.397    "bdev_lvol_rename",
00:07:50.397    "bdev_lvol_clone_bdev",
00:07:50.397    "bdev_lvol_clone",
00:07:50.397    "bdev_lvol_snapshot",
00:07:50.397    "bdev_lvol_create",
00:07:50.397    "bdev_lvol_delete_lvstore",
00:07:50.397    "bdev_lvol_rename_lvstore",
00:07:50.397    "bdev_lvol_create_lvstore",
00:07:50.397    "bdev_raid_set_options",
00:07:50.397    "bdev_raid_remove_base_bdev",
00:07:50.397    "bdev_raid_add_base_bdev",
00:07:50.397    "bdev_raid_delete",
00:07:50.397    "bdev_raid_create",
00:07:50.397    "bdev_raid_get_bdevs",
00:07:50.397    "bdev_error_inject_error",
00:07:50.397    "bdev_error_delete",
00:07:50.397    "bdev_error_create",
00:07:50.397    "bdev_split_delete",
00:07:50.397    "bdev_split_create",
00:07:50.397    "bdev_delay_delete",
00:07:50.397    "bdev_delay_create",
00:07:50.397    "bdev_delay_update_latency",
00:07:50.397    "bdev_zone_block_delete",
00:07:50.397    "bdev_zone_block_create",
00:07:50.397    "blobfs_create",
00:07:50.397    "blobfs_detect",
00:07:50.397    "blobfs_set_cache_size",
00:07:50.397    "bdev_aio_delete",
00:07:50.397    "bdev_aio_rescan",
00:07:50.397    "bdev_aio_create",
00:07:50.397    "bdev_ftl_set_property",
00:07:50.397    "bdev_ftl_get_properties",
00:07:50.397    "bdev_ftl_get_stats",
00:07:50.397    "bdev_ftl_unmap",
00:07:50.397    "bdev_ftl_unload",
00:07:50.397    "bdev_ftl_delete",
00:07:50.397    "bdev_ftl_load",
00:07:50.397    "bdev_ftl_create",
00:07:50.397    "bdev_virtio_attach_controller",
00:07:50.397    "bdev_virtio_scsi_get_devices",
00:07:50.397    "bdev_virtio_detach_controller",
00:07:50.397    "bdev_virtio_blk_set_hotplug",
00:07:50.397    "bdev_iscsi_delete",
00:07:50.397    "bdev_iscsi_create",
00:07:50.397    "bdev_iscsi_set_options",
00:07:50.397    "accel_error_inject_error",
00:07:50.397    "ioat_scan_accel_module",
00:07:50.397    "dsa_scan_accel_module",
00:07:50.397    "iaa_scan_accel_module",
00:07:50.397    "keyring_file_remove_key",
00:07:50.397    "keyring_file_add_key",
00:07:50.397    "keyring_linux_set_options",
00:07:50.397    "fsdev_aio_delete",
00:07:50.397    "fsdev_aio_create",
00:07:50.397    "iscsi_get_histogram",
00:07:50.397    "iscsi_enable_histogram",
00:07:50.397    "iscsi_set_options",
00:07:50.397    "iscsi_get_auth_groups",
00:07:50.397    "iscsi_auth_group_remove_secret",
00:07:50.397    "iscsi_auth_group_add_secret",
00:07:50.397    "iscsi_delete_auth_group",
00:07:50.397    "iscsi_create_auth_group",
00:07:50.397    "iscsi_set_discovery_auth",
00:07:50.397    "iscsi_get_options",
00:07:50.397    "iscsi_target_node_request_logout",
00:07:50.397    "iscsi_target_node_set_redirect",
00:07:50.397    "iscsi_target_node_set_auth",
00:07:50.397    "iscsi_target_node_add_lun",
00:07:50.397    "iscsi_get_stats",
00:07:50.397    "iscsi_get_connections",
00:07:50.397    "iscsi_portal_group_set_auth",
00:07:50.397    "iscsi_start_portal_group",
00:07:50.397    "iscsi_delete_portal_group",
00:07:50.397    "iscsi_create_portal_group",
00:07:50.397    "iscsi_get_portal_groups",
00:07:50.397    "iscsi_delete_target_node",
00:07:50.397    "iscsi_target_node_remove_pg_ig_maps",
00:07:50.397    "iscsi_target_node_add_pg_ig_maps",
00:07:50.397    "iscsi_create_target_node",
00:07:50.397    "iscsi_get_target_nodes",
00:07:50.397    "iscsi_delete_initiator_group",
00:07:50.397    "iscsi_initiator_group_remove_initiators",
00:07:50.397    "iscsi_initiator_group_add_initiators",
00:07:50.397    "iscsi_create_initiator_group",
00:07:50.397    "iscsi_get_initiator_groups",
00:07:50.397    "nvmf_set_crdt",
00:07:50.397    "nvmf_set_config",
00:07:50.397    "nvmf_set_max_subsystems",
00:07:50.397    "nvmf_stop_mdns_prr",
00:07:50.397    "nvmf_publish_mdns_prr",
00:07:50.397    "nvmf_subsystem_get_listeners",
00:07:50.397    "nvmf_subsystem_get_qpairs",
00:07:50.397    "nvmf_subsystem_get_controllers",
00:07:50.397    "nvmf_get_stats",
00:07:50.397    "nvmf_get_transports",
00:07:50.397    "nvmf_create_transport",
00:07:50.397    "nvmf_get_targets",
00:07:50.397    "nvmf_delete_target",
00:07:50.397    "nvmf_create_target",
00:07:50.397    "nvmf_subsystem_allow_any_host",
00:07:50.397    "nvmf_subsystem_set_keys",
00:07:50.397    "nvmf_subsystem_remove_host",
00:07:50.397    "nvmf_subsystem_add_host",
00:07:50.397    "nvmf_ns_remove_host",
00:07:50.397    "nvmf_ns_add_host",
00:07:50.397    "nvmf_subsystem_remove_ns",
00:07:50.397    "nvmf_subsystem_set_ns_ana_group",
00:07:50.397    "nvmf_subsystem_add_ns",
00:07:50.397    "nvmf_subsystem_listener_set_ana_state",
00:07:50.397    "nvmf_discovery_get_referrals",
00:07:50.397    "nvmf_discovery_remove_referral",
00:07:50.397    "nvmf_discovery_add_referral",
00:07:50.397    "nvmf_subsystem_remove_listener",
00:07:50.397    "nvmf_subsystem_add_listener",
00:07:50.397    "nvmf_delete_subsystem",
00:07:50.397    "nvmf_create_subsystem",
00:07:50.397    "nvmf_get_subsystems",
00:07:50.397    "env_dpdk_get_mem_stats",
00:07:50.397    "nbd_get_disks",
00:07:50.397    "nbd_stop_disk",
00:07:50.397    "nbd_start_disk",
00:07:50.397    "ublk_recover_disk",
00:07:50.397    "ublk_get_disks",
00:07:50.397    "ublk_stop_disk",
00:07:50.397    "ublk_start_disk",
00:07:50.397    "ublk_destroy_target",
00:07:50.397    "ublk_create_target",
00:07:50.397    "virtio_blk_create_transport",
00:07:50.397    "virtio_blk_get_transports",
00:07:50.397    "vhost_controller_set_coalescing",
00:07:50.397    "vhost_get_controllers",
00:07:50.397    "vhost_delete_controller",
00:07:50.397    "vhost_create_blk_controller",
00:07:50.397    "vhost_scsi_controller_remove_target",
00:07:50.397    "vhost_scsi_controller_add_target",
00:07:50.397    "vhost_start_scsi_controller",
00:07:50.397    "vhost_create_scsi_controller",
00:07:50.397    "thread_set_cpumask",
00:07:50.397    "scheduler_set_options",
00:07:50.397    "framework_get_governor",
00:07:50.397    "framework_get_scheduler",
00:07:50.397    "framework_set_scheduler",
00:07:50.397    "framework_get_reactors",
00:07:50.397    "thread_get_io_channels",
00:07:50.397    "thread_get_pollers",
00:07:50.397    "thread_get_stats",
00:07:50.397    "framework_monitor_context_switch",
00:07:50.397    "spdk_kill_instance",
00:07:50.397    "log_enable_timestamps",
00:07:50.397    "log_get_flags",
00:07:50.397    "log_clear_flag",
00:07:50.397    "log_set_flag",
00:07:50.397    "log_get_level",
00:07:50.397    "log_set_level",
00:07:50.397    "log_get_print_level",
00:07:50.397    "log_set_print_level",
00:07:50.397    "framework_enable_cpumask_locks",
00:07:50.397    "framework_disable_cpumask_locks",
00:07:50.397    "framework_wait_init",
00:07:50.397    "framework_start_init",
00:07:50.397    "scsi_get_devices",
00:07:50.397    "bdev_get_histogram",
00:07:50.397    "bdev_enable_histogram",
00:07:50.397    "bdev_set_qos_limit",
00:07:50.397    "bdev_set_qd_sampling_period",
00:07:50.397    "bdev_get_bdevs",
00:07:50.397    "bdev_reset_iostat",
00:07:50.397    "bdev_get_iostat",
00:07:50.397    "bdev_examine",
00:07:50.397    "bdev_wait_for_examine",
00:07:50.397    "bdev_set_options",
00:07:50.397    "accel_get_stats",
00:07:50.397    "accel_set_options",
00:07:50.397    "accel_set_driver",
00:07:50.397    "accel_crypto_key_destroy",
00:07:50.397    "accel_crypto_keys_get",
00:07:50.397    "accel_crypto_key_create",
00:07:50.397    "accel_assign_opc",
00:07:50.397    "accel_get_module_info",
00:07:50.397    "accel_get_opc_assignments",
00:07:50.397    "vmd_rescan",
00:07:50.397    "vmd_remove_device",
00:07:50.398    "vmd_enable",
00:07:50.398    "sock_get_default_impl",
00:07:50.398    "sock_set_default_impl",
00:07:50.398    "sock_impl_set_options",
00:07:50.398    "sock_impl_get_options",
00:07:50.398    "iobuf_get_stats",
00:07:50.398    "iobuf_set_options",
00:07:50.398    "keyring_get_keys",
00:07:50.398    "framework_get_pci_devices",
00:07:50.398    "framework_get_config",
00:07:50.398    "framework_get_subsystems",
00:07:50.398    "fsdev_set_opts",
00:07:50.398    "fsdev_get_opts",
00:07:50.398    "trace_get_info",
00:07:50.398    "trace_get_tpoint_group_mask",
00:07:50.398    "trace_disable_tpoint_group",
00:07:50.398    "trace_enable_tpoint_group",
00:07:50.398    "trace_clear_tpoint_mask",
00:07:50.398    "trace_set_tpoint_mask",
00:07:50.398    "notify_get_notifications",
00:07:50.398    "notify_get_types",
00:07:50.398    "spdk_get_version",
00:07:50.398    "rpc_get_methods"
00:07:50.398  ]
00:07:50.398   10:35:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:50.398   10:35:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:07:50.398   10:35:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1843699
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1843699 ']'
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1843699
00:07:50.398    10:35:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:50.398    10:35:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843699
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843699'
00:07:50.398  killing process with pid 1843699
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1843699
00:07:50.398   10:35:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1843699
00:07:52.924  
00:07:52.924  real	0m3.952s
00:07:52.924  user	0m7.053s
00:07:52.924  sys	0m0.705s
00:07:52.924   10:35:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:52.924   10:35:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:52.924  ************************************
00:07:52.924  END TEST spdkcli_tcp
00:07:52.924  ************************************
00:07:52.924   10:35:42  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vhost-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:52.924   10:35:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:52.924   10:35:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:52.924   10:35:42  -- common/autotest_common.sh@10 -- # set +x
00:07:52.924  ************************************
00:07:52.924  START TEST dpdk_mem_utility
00:07:52.924  ************************************
00:07:52.925   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:07:52.925  * Looking for test storage...
00:07:52.925  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/dpdk_memory_utility
00:07:52.925    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:52.925     10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:07:52.925     10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:53.183     10:35:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:53.183    10:35:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:53.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.183  		--rc genhtml_branch_coverage=1
00:07:53.183  		--rc genhtml_function_coverage=1
00:07:53.183  		--rc genhtml_legend=1
00:07:53.183  		--rc geninfo_all_blocks=1
00:07:53.183  		--rc geninfo_unexecuted_blocks=1
00:07:53.183  		
00:07:53.183  		'
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:53.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.183  		--rc genhtml_branch_coverage=1
00:07:53.183  		--rc genhtml_function_coverage=1
00:07:53.183  		--rc genhtml_legend=1
00:07:53.183  		--rc geninfo_all_blocks=1
00:07:53.183  		--rc geninfo_unexecuted_blocks=1
00:07:53.183  		
00:07:53.183  		'
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:53.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.183  		--rc genhtml_branch_coverage=1
00:07:53.183  		--rc genhtml_function_coverage=1
00:07:53.183  		--rc genhtml_legend=1
00:07:53.183  		--rc geninfo_all_blocks=1
00:07:53.183  		--rc geninfo_unexecuted_blocks=1
00:07:53.183  		
00:07:53.183  		'
00:07:53.183    10:35:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:53.183  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:53.183  		--rc genhtml_branch_coverage=1
00:07:53.183  		--rc genhtml_function_coverage=1
00:07:53.183  		--rc genhtml_legend=1
00:07:53.183  		--rc geninfo_all_blocks=1
00:07:53.183  		--rc geninfo_unexecuted_blocks=1
00:07:53.183  		
00:07:53.183  		'
00:07:53.183   10:35:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:07:53.183   10:35:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1844325
00:07:53.183   10:35:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1844325
00:07:53.183   10:35:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1844325 ']'
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:53.183  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:53.183   10:35:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:53.183  [2024-11-19 10:35:42.878721] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:53.183  [2024-11-19 10:35:42.878828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844325 ]
00:07:53.442  [2024-11-19 10:35:43.007012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.442  [2024-11-19 10:35:43.106346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.378   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:54.378   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:07:54.378   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:07:54.378   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:07:54.378   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.378   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:54.378  {
00:07:54.378  "filename": "/tmp/spdk_mem_dump.txt"
00:07:54.378  }
00:07:54.378   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.378   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:07:54.378  DPDK memory size 816.000000 MiB in 1 heap(s)
00:07:54.378  1 heaps totaling size 816.000000 MiB
00:07:54.378    size:  816.000000 MiB heap id: 0
00:07:54.378  end heaps----------
00:07:54.378  9 mempools totaling size 595.772034 MiB
00:07:54.378    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:07:54.378    size:  158.602051 MiB name: PDU_data_out_Pool
00:07:54.378    size:   92.545471 MiB name: bdev_io_1844325
00:07:54.378    size:   50.003479 MiB name: msgpool_1844325
00:07:54.378    size:   36.509338 MiB name: fsdev_io_1844325
00:07:54.378    size:   21.763794 MiB name: PDU_Pool
00:07:54.378    size:   19.513306 MiB name: SCSI_TASK_Pool
00:07:54.378    size:    4.133484 MiB name: evtpool_1844325
00:07:54.378    size:    0.026123 MiB name: Session_Pool
00:07:54.378  end mempools-------
00:07:54.378  6 memzones totaling size 4.142822 MiB
00:07:54.378    size:    1.000366 MiB name: RG_ring_0_1844325
00:07:54.378    size:    1.000366 MiB name: RG_ring_1_1844325
00:07:54.378    size:    1.000366 MiB name: RG_ring_4_1844325
00:07:54.378    size:    1.000366 MiB name: RG_ring_5_1844325
00:07:54.378    size:    0.125366 MiB name: RG_ring_2_1844325
00:07:54.378    size:    0.015991 MiB name: RG_ring_3_1844325
00:07:54.378  end memzones-------
00:07:54.378   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:54.378  heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19
00:07:54.378    list of free elements. size: 16.857605 MiB
00:07:54.378      element at address: 0x200006400000 with size:    1.995972 MiB
00:07:54.378      element at address: 0x20000a600000 with size:    1.995972 MiB
00:07:54.378      element at address: 0x200003e00000 with size:    1.991028 MiB
00:07:54.378      element at address: 0x200018d00040 with size:    0.999939 MiB
00:07:54.378      element at address: 0x200019100040 with size:    0.999939 MiB
00:07:54.378      element at address: 0x200019200000 with size:    0.999329 MiB
00:07:54.378      element at address: 0x200000400000 with size:    0.998108 MiB
00:07:54.378      element at address: 0x200031e00000 with size:    0.994324 MiB
00:07:54.378      element at address: 0x200018a00000 with size:    0.959900 MiB
00:07:54.378      element at address: 0x200019500040 with size:    0.937256 MiB
00:07:54.378      element at address: 0x200000200000 with size:    0.716980 MiB
00:07:54.378      element at address: 0x20001ac00000 with size:    0.583191 MiB
00:07:54.378      element at address: 0x200000c00000 with size:    0.495300 MiB
00:07:54.379      element at address: 0x200018e00000 with size:    0.491150 MiB
00:07:54.379      element at address: 0x200019600000 with size:    0.485657 MiB
00:07:54.379      element at address: 0x200012c00000 with size:    0.446167 MiB
00:07:54.379      element at address: 0x200028000000 with size:    0.411072 MiB
00:07:54.379      element at address: 0x200000800000 with size:    0.355286 MiB
00:07:54.379      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:07:54.379    list of standard malloc elements. size: 199.221497 MiB
00:07:54.379      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:07:54.379      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:07:54.379      element at address: 0x200018bfff80 with size:    1.000183 MiB
00:07:54.379      element at address: 0x200018ffff80 with size:    1.000183 MiB
00:07:54.379      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:07:54.379      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:07:54.379      element at address: 0x2000195eff40 with size:    0.062683 MiB
00:07:54.379      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:07:54.379      element at address: 0x200012bff040 with size:    0.000427 MiB
00:07:54.379      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:07:54.379      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:07:54.379      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200000cff000 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff200 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff300 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff400 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff500 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff600 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff700 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff800 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bff900 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:07:54.379      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:07:54.379    list of memzone associated elements. size: 599.920898 MiB
00:07:54.379      element at address: 0x20001ac954c0 with size:  211.416809 MiB
00:07:54.379        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:54.379      element at address: 0x20002806ff80 with size:  157.562622 MiB
00:07:54.379        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:54.379      element at address: 0x200012df4740 with size:   92.045105 MiB
00:07:54.379        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_1844325_0
00:07:54.379      element at address: 0x200000dff340 with size:   48.003113 MiB
00:07:54.379        associated memzone info: size:   48.002930 MiB name: MP_msgpool_1844325_0
00:07:54.379      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:07:54.379        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_1844325_0
00:07:54.379      element at address: 0x2000197be900 with size:   20.255615 MiB
00:07:54.379        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:07:54.379      element at address: 0x200031ffeb00 with size:   18.005127 MiB
00:07:54.379        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:54.379      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:07:54.379        associated memzone info: size:    3.000122 MiB name: MP_evtpool_1844325_0
00:07:54.379      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:07:54.379        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_1844325
00:07:54.379      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:07:54.379        associated memzone info: size:    1.007996 MiB name: MP_evtpool_1844325
00:07:54.379      element at address: 0x200018efde00 with size:    1.008179 MiB
00:07:54.379        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:07:54.379      element at address: 0x2000196bc780 with size:    1.008179 MiB
00:07:54.379        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:54.379      element at address: 0x200018afde00 with size:    1.008179 MiB
00:07:54.379        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:07:54.379      element at address: 0x200012cf25c0 with size:    1.008179 MiB
00:07:54.379        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:54.379      element at address: 0x200000cff100 with size:    1.000549 MiB
00:07:54.379        associated memzone info: size:    1.000366 MiB name: RG_ring_0_1844325
00:07:54.379      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:07:54.379        associated memzone info: size:    1.000366 MiB name: RG_ring_1_1844325
00:07:54.379      element at address: 0x2000192ffd40 with size:    1.000549 MiB
00:07:54.379        associated memzone info: size:    1.000366 MiB name: RG_ring_4_1844325
00:07:54.379      element at address: 0x200031efe8c0 with size:    1.000549 MiB
00:07:54.379        associated memzone info: size:    1.000366 MiB name: RG_ring_5_1844325
00:07:54.379      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:07:54.379        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_1844325
00:07:54.379      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:07:54.379        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_1844325
00:07:54.379      element at address: 0x200018e7dbc0 with size:    0.500549 MiB
00:07:54.379        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:07:54.379      element at address: 0x200012c72380 with size:    0.500549 MiB
00:07:54.379        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:54.379      element at address: 0x20001967c540 with size:    0.250549 MiB
00:07:54.379        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:54.379      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:07:54.379        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_1844325
00:07:54.379      element at address: 0x20000085f180 with size:    0.125549 MiB
00:07:54.379        associated memzone info: size:    0.125366 MiB name: RG_ring_2_1844325
00:07:54.379      element at address: 0x200018af5bc0 with size:    0.031799 MiB
00:07:54.379        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:54.379      element at address: 0x2000280693c0 with size:    0.023804 MiB
00:07:54.379        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:07:54.379      element at address: 0x20000085af40 with size:    0.016174 MiB
00:07:54.379        associated memzone info: size:    0.015991 MiB name: RG_ring_3_1844325
00:07:54.379      element at address: 0x20002806f540 with size:    0.002502 MiB
00:07:54.379        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:07:54.379      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:07:54.379        associated memzone info: size:    0.000183 MiB name: MP_msgpool_1844325
00:07:54.379      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:07:54.379        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_1844325
00:07:54.379      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:07:54.379        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_1844325
00:07:54.379      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:07:54.379        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:07:54.379   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:54.379   10:35:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1844325
00:07:54.379   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1844325 ']'
00:07:54.379   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1844325
00:07:54.379    10:35:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:07:54.379   10:35:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:54.379    10:35:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844325
00:07:54.379   10:35:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:54.379   10:35:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:54.379   10:35:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844325'
00:07:54.379  killing process with pid 1844325
00:07:54.379   10:35:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1844325
00:07:54.379   10:35:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1844325
00:07:56.911  
00:07:56.911  real	0m3.676s
00:07:56.911  user	0m3.583s
00:07:56.911  sys	0m0.600s
00:07:56.911   10:35:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:56.911   10:35:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:56.911  ************************************
00:07:56.911  END TEST dpdk_mem_utility
00:07:56.911  ************************************
00:07:56.911   10:35:46  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/event.sh
00:07:56.911   10:35:46  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:56.911   10:35:46  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:56.911   10:35:46  -- common/autotest_common.sh@10 -- # set +x
00:07:56.911  ************************************
00:07:56.911  START TEST event
00:07:56.911  ************************************
00:07:56.911   10:35:46 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/event.sh
00:07:56.911  * Looking for test storage...
00:07:56.911  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event
00:07:56.911    10:35:46 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:56.911     10:35:46 event -- common/autotest_common.sh@1693 -- # lcov --version
00:07:56.911     10:35:46 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:56.911    10:35:46 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:56.911    10:35:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:56.911    10:35:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:56.911    10:35:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:56.911    10:35:46 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:56.911    10:35:46 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:56.911    10:35:46 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:56.911    10:35:46 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:56.911    10:35:46 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:56.911    10:35:46 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:56.911    10:35:46 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:56.911    10:35:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:56.911    10:35:46 event -- scripts/common.sh@344 -- # case "$op" in
00:07:56.911    10:35:46 event -- scripts/common.sh@345 -- # : 1
00:07:56.911    10:35:46 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:56.911    10:35:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:56.911     10:35:46 event -- scripts/common.sh@365 -- # decimal 1
00:07:56.911     10:35:46 event -- scripts/common.sh@353 -- # local d=1
00:07:56.911     10:35:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:56.911     10:35:46 event -- scripts/common.sh@355 -- # echo 1
00:07:56.911    10:35:46 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:56.911     10:35:46 event -- scripts/common.sh@366 -- # decimal 2
00:07:56.911     10:35:46 event -- scripts/common.sh@353 -- # local d=2
00:07:56.911     10:35:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:56.911     10:35:46 event -- scripts/common.sh@355 -- # echo 2
00:07:56.911    10:35:46 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:56.911    10:35:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:56.911    10:35:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:56.911    10:35:46 event -- scripts/common.sh@368 -- # return 0
00:07:56.911    10:35:46 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:56.911    10:35:46 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:56.911  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.911  		--rc genhtml_branch_coverage=1
00:07:56.911  		--rc genhtml_function_coverage=1
00:07:56.911  		--rc genhtml_legend=1
00:07:56.911  		--rc geninfo_all_blocks=1
00:07:56.911  		--rc geninfo_unexecuted_blocks=1
00:07:56.911  		
00:07:56.911  		'
00:07:56.912    10:35:46 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:56.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.912  		--rc genhtml_branch_coverage=1
00:07:56.912  		--rc genhtml_function_coverage=1
00:07:56.912  		--rc genhtml_legend=1
00:07:56.912  		--rc geninfo_all_blocks=1
00:07:56.912  		--rc geninfo_unexecuted_blocks=1
00:07:56.912  		
00:07:56.912  		'
00:07:56.912    10:35:46 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:56.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.912  		--rc genhtml_branch_coverage=1
00:07:56.912  		--rc genhtml_function_coverage=1
00:07:56.912  		--rc genhtml_legend=1
00:07:56.912  		--rc geninfo_all_blocks=1
00:07:56.912  		--rc geninfo_unexecuted_blocks=1
00:07:56.912  		
00:07:56.912  		'
00:07:56.912    10:35:46 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:56.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:56.912  		--rc genhtml_branch_coverage=1
00:07:56.912  		--rc genhtml_function_coverage=1
00:07:56.912  		--rc genhtml_legend=1
00:07:56.912  		--rc geninfo_all_blocks=1
00:07:56.912  		--rc geninfo_unexecuted_blocks=1
00:07:56.912  		
00:07:56.912  		'
00:07:56.912   10:35:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:56.912    10:35:46 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:56.912   10:35:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:56.912   10:35:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:56.912   10:35:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:56.912   10:35:46 event -- common/autotest_common.sh@10 -- # set +x
00:07:56.912  ************************************
00:07:56.912  START TEST event_perf
00:07:56.912  ************************************
00:07:56.912   10:35:46 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:56.912  Running I/O for 1 seconds...[2024-11-19 10:35:46.610944] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:56.912  [2024-11-19 10:35:46.611033] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844830 ]
00:07:57.170  [2024-11-19 10:35:46.748788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:57.170  [2024-11-19 10:35:46.858260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:57.170  [2024-11-19 10:35:46.858327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:57.170  [2024-11-19 10:35:46.858385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:57.170  [2024-11-19 10:35:46.858398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:58.549  Running I/O for 1 seconds...
00:07:58.549  lcore  0:   208073
00:07:58.549  lcore  1:   208072
00:07:58.549  lcore  2:   208073
00:07:58.549  lcore  3:   208072
00:07:58.549  done.
00:07:58.549  
00:07:58.549  real	0m1.527s
00:07:58.549  user	0m4.354s
00:07:58.549  sys	0m0.168s
00:07:58.549   10:35:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:58.549   10:35:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:58.549  ************************************
00:07:58.549  END TEST event_perf
00:07:58.549  ************************************
00:07:58.549   10:35:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:58.549   10:35:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:58.549   10:35:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.549   10:35:48 event -- common/autotest_common.sh@10 -- # set +x
00:07:58.549  ************************************
00:07:58.549  START TEST event_reactor
00:07:58.549  ************************************
00:07:58.549   10:35:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:58.549  [2024-11-19 10:35:48.221440] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:07:58.549  [2024-11-19 10:35:48.221536] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845124 ]
00:07:58.808  [2024-11-19 10:35:48.360840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.808  [2024-11-19 10:35:48.462857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.186  test_start
00:08:00.186  oneshot
00:08:00.186  tick 100
00:08:00.186  tick 100
00:08:00.186  tick 250
00:08:00.186  tick 100
00:08:00.186  tick 100
00:08:00.186  tick 100
00:08:00.186  tick 250
00:08:00.186  tick 500
00:08:00.186  tick 100
00:08:00.186  tick 100
00:08:00.186  tick 250
00:08:00.186  tick 100
00:08:00.186  tick 100
00:08:00.186  test_end
00:08:00.186  
00:08:00.186  real	0m1.510s
00:08:00.186  user	0m1.344s
00:08:00.186  sys	0m0.159s
00:08:00.186   10:35:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:00.186   10:35:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:00.186  ************************************
00:08:00.186  END TEST event_reactor
00:08:00.186  ************************************
00:08:00.186   10:35:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:00.186   10:35:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:00.186   10:35:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:00.186   10:35:49 event -- common/autotest_common.sh@10 -- # set +x
00:08:00.186  ************************************
00:08:00.186  START TEST event_reactor_perf
00:08:00.186  ************************************
00:08:00.186   10:35:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:00.186  [2024-11-19 10:35:49.814571] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:00.186  [2024-11-19 10:35:49.814668] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845411 ]
00:08:00.186  [2024-11-19 10:35:49.951475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.445  [2024-11-19 10:35:50.068315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.819  test_start
00:08:01.819  test_end
00:08:01.819  Performance:   381849 events per second
00:08:01.819  
00:08:01.819  real	0m1.517s
00:08:01.819  user	0m1.344s
00:08:01.819  sys	0m0.164s
00:08:01.819   10:35:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:01.819   10:35:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:01.819  ************************************
00:08:01.819  END TEST event_reactor_perf
00:08:01.819  ************************************
00:08:01.819    10:35:51 event -- event/event.sh@49 -- # uname -s
00:08:01.819   10:35:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:01.819   10:35:51 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:01.819   10:35:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:01.819   10:35:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:01.819   10:35:51 event -- common/autotest_common.sh@10 -- # set +x
00:08:01.819  ************************************
00:08:01.819  START TEST event_scheduler
00:08:01.819  ************************************
00:08:01.819   10:35:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:01.819  * Looking for test storage...
00:08:01.819  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:01.819     10:35:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:08:01.819     10:35:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:01.819     10:35:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:01.819    10:35:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:01.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:01.819  		--rc genhtml_branch_coverage=1
00:08:01.819  		--rc genhtml_function_coverage=1
00:08:01.819  		--rc genhtml_legend=1
00:08:01.819  		--rc geninfo_all_blocks=1
00:08:01.819  		--rc geninfo_unexecuted_blocks=1
00:08:01.819  		
00:08:01.819  		'
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:01.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:01.819  		--rc genhtml_branch_coverage=1
00:08:01.819  		--rc genhtml_function_coverage=1
00:08:01.819  		--rc genhtml_legend=1
00:08:01.819  		--rc geninfo_all_blocks=1
00:08:01.819  		--rc geninfo_unexecuted_blocks=1
00:08:01.819  		
00:08:01.819  		'
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:01.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:01.819  		--rc genhtml_branch_coverage=1
00:08:01.819  		--rc genhtml_function_coverage=1
00:08:01.819  		--rc genhtml_legend=1
00:08:01.819  		--rc geninfo_all_blocks=1
00:08:01.819  		--rc geninfo_unexecuted_blocks=1
00:08:01.819  		
00:08:01.819  		'
00:08:01.819    10:35:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:01.819  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:01.819  		--rc genhtml_branch_coverage=1
00:08:01.819  		--rc genhtml_function_coverage=1
00:08:01.820  		--rc genhtml_legend=1
00:08:01.820  		--rc geninfo_all_blocks=1
00:08:01.820  		--rc geninfo_unexecuted_blocks=1
00:08:01.820  		
00:08:01.820  		'
00:08:01.820   10:35:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:08:01.820   10:35:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1845647
00:08:01.820   10:35:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:08:01.820   10:35:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1845647
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1845647 ']'
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.820  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:01.820   10:35:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:01.820   10:35:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:08:02.078  [2024-11-19 10:35:51.650768] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:02.078  [2024-11-19 10:35:51.650875] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845647 ]
00:08:02.078  [2024-11-19 10:35:51.784968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:02.337  [2024-11-19 10:35:51.891984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:02.337  [2024-11-19 10:35:51.892045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:02.337  [2024-11-19 10:35:51.892101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:02.337  [2024-11-19 10:35:51.892114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:08:02.905   10:35:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:02.905  [2024-11-19 10:35:52.482632] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:08:02.905  [2024-11-19 10:35:52.482667] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:08:02.905  [2024-11-19 10:35:52.482688] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:02.905  [2024-11-19 10:35:52.482700] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:02.905  [2024-11-19 10:35:52.482712] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:02.905   10:35:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:02.905   10:35:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  [2024-11-19 10:35:52.764150] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:03.165   10:35:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.165   10:35:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:03.165   10:35:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:03.165   10:35:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.165   10:35:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  ************************************
00:08:03.165  START TEST scheduler_create_thread
00:08:03.165  ************************************
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  2
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  3
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  4
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.165  5
00:08:03.165   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166  6
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166  7
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166  8
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166  9
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166  10
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:03.166   10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:03.166    10:35:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.071    10:35:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.071   10:35:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:05.071   10:35:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:05.071   10:35:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.071   10:35:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.638   10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.638  
00:08:05.638  real	0m2.621s
00:08:05.638  user	0m0.020s
00:08:05.638  sys	0m0.012s
00:08:05.638   10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:05.638   10:35:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:05.638  ************************************
00:08:05.638  END TEST scheduler_create_thread
00:08:05.638  ************************************
00:08:05.897   10:35:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:05.897   10:35:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1845647
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1845647 ']'
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1845647
00:08:05.897    10:35:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:05.897    10:35:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845647
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845647'
00:08:05.897  killing process with pid 1845647
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1845647
00:08:05.897   10:35:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1845647
00:08:06.156  [2024-11-19 10:35:55.904229] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:07.533  
00:08:07.533  real	0m5.621s
00:08:07.533  user	0m9.824s
00:08:07.533  sys	0m0.600s
00:08:07.533   10:35:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.533   10:35:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:07.533  ************************************
00:08:07.533  END TEST event_scheduler
00:08:07.533  ************************************
00:08:07.533   10:35:57 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:07.533   10:35:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:07.533   10:35:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:07.533   10:35:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.533   10:35:57 event -- common/autotest_common.sh@10 -- # set +x
00:08:07.533  ************************************
00:08:07.533  START TEST app_repeat
00:08:07.533  ************************************
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1846435
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1846435'
00:08:07.533  Process app_repeat pid: 1846435
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:07.533  spdk_app_start Round 0
00:08:07.533   10:35:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1846435 /var/tmp/spdk-nbd.sock
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1846435 ']'
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:07.533  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.533   10:35:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:07.533  [2024-11-19 10:35:57.155558] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:07.533  [2024-11-19 10:35:57.155667] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846435 ]
00:08:07.533  [2024-11-19 10:35:57.293371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:07.792  [2024-11-19 10:35:57.398303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.792  [2024-11-19 10:35:57.398317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:08.358   10:35:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.358   10:35:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:08.358   10:35:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:08.616  Malloc0
00:08:08.616   10:35:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:08.874  Malloc1
00:08:08.874   10:35:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.874   10:35:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:08.875   10:35:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:09.133  /dev/nbd0
00:08:09.133    10:35:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:09.133   10:35:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:09.133  1+0 records in
00:08:09.133  1+0 records out
00:08:09.133  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237011 s, 17.3 MB/s
00:08:09.133    10:35:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:09.133   10:35:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:09.133   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.133   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:09.133   10:35:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:09.392  /dev/nbd1
00:08:09.392    10:35:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:09.392   10:35:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:09.392  1+0 records in
00:08:09.392  1+0 records out
00:08:09.392  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241833 s, 16.9 MB/s
00:08:09.392    10:35:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:09.392   10:35:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:09.392   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:09.392   10:35:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:09.392    10:35:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:09.392    10:35:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:09.392     10:35:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:09.392    10:35:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:09.392    {
00:08:09.392      "nbd_device": "/dev/nbd0",
00:08:09.392      "bdev_name": "Malloc0"
00:08:09.392    },
00:08:09.392    {
00:08:09.392      "nbd_device": "/dev/nbd1",
00:08:09.392      "bdev_name": "Malloc1"
00:08:09.392    }
00:08:09.392  ]'
00:08:09.653     10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:09.653    {
00:08:09.653      "nbd_device": "/dev/nbd0",
00:08:09.653      "bdev_name": "Malloc0"
00:08:09.653    },
00:08:09.653    {
00:08:09.653      "nbd_device": "/dev/nbd1",
00:08:09.653      "bdev_name": "Malloc1"
00:08:09.653    }
00:08:09.653  ]'
00:08:09.653     10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:09.653    10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:09.653  /dev/nbd1'
00:08:09.653     10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:09.653     10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:09.653  /dev/nbd1'
00:08:09.653    10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:09.653    10:35:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:09.653   10:35:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:09.653   10:35:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:09.653   10:35:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:09.654  256+0 records in
00:08:09.654  256+0 records out
00:08:09.654  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106703 s, 98.3 MB/s
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:09.654  256+0 records in
00:08:09.654  256+0 records out
00:08:09.654  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164919 s, 63.6 MB/s
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:09.654  256+0 records in
00:08:09.654  256+0 records out
00:08:09.654  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184827 s, 56.7 MB/s
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:09.654   10:35:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:09.923    10:35:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:09.923   10:35:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:10.200    10:35:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:10.200   10:35:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:10.200    10:35:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:10.200    10:35:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:10.200     10:35:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:10.200    10:35:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:10.200     10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:10.200     10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:10.200    10:35:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:10.201     10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:10.201     10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:10.459     10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:10.459    10:35:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:10.459    10:35:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:10.459   10:35:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:10.459   10:35:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:10.459   10:35:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:10.459   10:35:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:10.718   10:36:00 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:12.096  [2024-11-19 10:36:01.543697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:12.096  [2024-11-19 10:36:01.646932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.096  [2024-11-19 10:36:01.646932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:12.096  [2024-11-19 10:36:01.829013] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:12.096  [2024-11-19 10:36:01.829081] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:14.001   10:36:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:14.001   10:36:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:08:14.001  spdk_app_start Round 1
00:08:14.001   10:36:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1846435 /var/tmp/spdk-nbd.sock
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1846435 ']'
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:14.001  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:14.001   10:36:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:14.001   10:36:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:14.259  Malloc0
00:08:14.259   10:36:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:14.259  Malloc1
00:08:14.518   10:36:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:14.518  /dev/nbd0
00:08:14.518    10:36:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:14.518   10:36:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:14.518   10:36:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:14.518   10:36:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:14.519   10:36:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:14.519  1+0 records in
00:08:14.519  1+0 records out
00:08:14.519  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243937 s, 16.8 MB/s
00:08:14.519    10:36:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:14.777   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:14.777   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:14.777   10:36:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:14.777  /dev/nbd1
00:08:14.777    10:36:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:14.777   10:36:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:14.777   10:36:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:15.062  1+0 records in
00:08:15.062  1+0 records out
00:08:15.062  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026398 s, 15.5 MB/s
00:08:15.062    10:36:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:15.062   10:36:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.062     10:36:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:15.062    {
00:08:15.062      "nbd_device": "/dev/nbd0",
00:08:15.062      "bdev_name": "Malloc0"
00:08:15.062    },
00:08:15.062    {
00:08:15.062      "nbd_device": "/dev/nbd1",
00:08:15.062      "bdev_name": "Malloc1"
00:08:15.062    }
00:08:15.062  ]'
00:08:15.062     10:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:15.062    {
00:08:15.062      "nbd_device": "/dev/nbd0",
00:08:15.062      "bdev_name": "Malloc0"
00:08:15.062    },
00:08:15.062    {
00:08:15.062      "nbd_device": "/dev/nbd1",
00:08:15.062      "bdev_name": "Malloc1"
00:08:15.062    }
00:08:15.062  ]'
00:08:15.062     10:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:15.062  /dev/nbd1'
00:08:15.062     10:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:15.062  /dev/nbd1'
00:08:15.062     10:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:15.062    10:36:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:15.062  256+0 records in
00:08:15.062  256+0 records out
00:08:15.062  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105962 s, 99.0 MB/s
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:15.062   10:36:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:15.322  256+0 records in
00:08:15.322  256+0 records out
00:08:15.322  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174645 s, 60.0 MB/s
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:15.322  256+0 records in
00:08:15.322  256+0 records out
00:08:15.322  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194614 s, 53.9 MB/s
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:15.322   10:36:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:15.582    10:36:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:15.582    10:36:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:15.582   10:36:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:15.582    10:36:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:15.582    10:36:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:15.582     10:36:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:15.842    10:36:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:15.842     10:36:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:15.842     10:36:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:15.842    10:36:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:15.842     10:36:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:15.842     10:36:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:15.842     10:36:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:15.842    10:36:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:15.842    10:36:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:15.842   10:36:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:15.842   10:36:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:15.842   10:36:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:15.842   10:36:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:16.411   10:36:05 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:17.791  [2024-11-19 10:36:07.150596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:17.791  [2024-11-19 10:36:07.253667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.791  [2024-11-19 10:36:07.253675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:17.791  [2024-11-19 10:36:07.435667] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:17.791  [2024-11-19 10:36:07.435732] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:19.696   10:36:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:19.696   10:36:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:08:19.696  spdk_app_start Round 2
00:08:19.696   10:36:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1846435 /var/tmp/spdk-nbd.sock
00:08:19.696   10:36:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1846435 ']'
00:08:19.696   10:36:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:19.696   10:36:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:19.696   10:36:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:19.696  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:19.696   10:36:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:19.696   10:36:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:19.696   10:36:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:19.696   10:36:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:19.696   10:36:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:19.696  Malloc0
00:08:19.696   10:36:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:19.955  Malloc1
00:08:19.955   10:36:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:19.955   10:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:20.215  /dev/nbd0
00:08:20.215    10:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:20.215   10:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:20.215  1+0 records in
00:08:20.215  1+0 records out
00:08:20.215  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237412 s, 17.3 MB/s
00:08:20.215    10:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:20.215   10:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:20.216   10:36:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:20.216   10:36:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:20.216   10:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:20.216   10:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:20.216   10:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:20.475  /dev/nbd1
00:08:20.475    10:36:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:20.475   10:36:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:20.475  1+0 records in
00:08:20.475  1+0 records out
00:08:20.475  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270542 s, 15.1 MB/s
00:08:20.475    10:36:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdtest
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:20.475   10:36:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:08:20.475   10:36:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:20.475   10:36:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:20.475    10:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:20.475    10:36:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:20.475     10:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:20.735    10:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:20.735    {
00:08:20.735      "nbd_device": "/dev/nbd0",
00:08:20.735      "bdev_name": "Malloc0"
00:08:20.735    },
00:08:20.735    {
00:08:20.735      "nbd_device": "/dev/nbd1",
00:08:20.735      "bdev_name": "Malloc1"
00:08:20.735    }
00:08:20.735  ]'
00:08:20.735     10:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:20.735    {
00:08:20.735      "nbd_device": "/dev/nbd0",
00:08:20.735      "bdev_name": "Malloc0"
00:08:20.735    },
00:08:20.735    {
00:08:20.735      "nbd_device": "/dev/nbd1",
00:08:20.735      "bdev_name": "Malloc1"
00:08:20.735    }
00:08:20.735  ]'
00:08:20.735     10:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:20.735    10:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:20.735  /dev/nbd1'
00:08:20.735     10:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:20.735  /dev/nbd1'
00:08:20.735     10:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:20.735    10:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:20.735    10:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:20.735  256+0 records in
00:08:20.735  256+0 records out
00:08:20.735  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105555 s, 99.3 MB/s
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:20.735  256+0 records in
00:08:20.735  256+0 records out
00:08:20.735  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157726 s, 66.5 MB/s
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:20.735  256+0 records in
00:08:20.735  256+0 records out
00:08:20.735  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177212 s, 59.2 MB/s
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/nbdrandtest
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:20.735   10:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:20.995    10:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:20.995   10:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:21.254    10:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:21.254   10:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:21.254    10:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:21.254    10:36:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:21.254     10:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:21.513    10:36:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:21.513     10:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:21.513     10:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:21.513    10:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:21.513     10:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:21.513     10:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:21.513     10:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:21.513    10:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:21.513    10:36:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:21.513   10:36:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:21.513   10:36:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:21.513   10:36:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:21.513   10:36:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:22.082   10:36:11 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:23.019  [2024-11-19 10:36:12.733275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:23.278  [2024-11-19 10:36:12.837220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:23.278  [2024-11-19 10:36:12.837221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:23.278  [2024-11-19 10:36:13.020765] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:23.278  [2024-11-19 10:36:13.020826] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:25.182   10:36:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1846435 /var/tmp/spdk-nbd.sock
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1846435 ']'
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:25.182  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:08:25.182   10:36:14 event.app_repeat -- event/event.sh@39 -- # killprocess 1846435
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1846435 ']'
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1846435
00:08:25.182    10:36:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:25.182    10:36:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846435
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846435'
00:08:25.182  killing process with pid 1846435
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1846435
00:08:25.182   10:36:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1846435
00:08:26.121  spdk_app_start is called in Round 0.
00:08:26.121  Shutdown signal received, stop current app iteration
00:08:26.121  Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:08:26.121  spdk_app_start is called in Round 1.
00:08:26.121  Shutdown signal received, stop current app iteration
00:08:26.121  Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:08:26.121  spdk_app_start is called in Round 2.
00:08:26.121  Shutdown signal received, stop current app iteration
00:08:26.121  Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 reinitialization...
00:08:26.121  spdk_app_start is called in Round 3.
00:08:26.121  Shutdown signal received, stop current app iteration
00:08:26.121   10:36:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:08:26.121   10:36:15 event.app_repeat -- event/event.sh@42 -- # return 0
00:08:26.121  
00:08:26.121  real	0m18.725s
00:08:26.121  user	0m39.106s
00:08:26.121  sys	0m3.251s
00:08:26.121   10:36:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:26.121   10:36:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:26.121  ************************************
00:08:26.121  END TEST app_repeat
00:08:26.121  ************************************
00:08:26.121   10:36:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:08:26.121   10:36:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/cpu_locks.sh
00:08:26.121   10:36:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:26.121   10:36:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:26.121   10:36:15 event -- common/autotest_common.sh@10 -- # set +x
00:08:26.121  ************************************
00:08:26.121  START TEST cpu_locks
00:08:26.121  ************************************
00:08:26.121   10:36:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/cpu_locks.sh
00:08:26.383  * Looking for test storage...
00:08:26.383  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/event
00:08:26.383    10:36:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:26.384     10:36:15 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:08:26.384     10:36:15 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:26.384    10:36:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:26.384    10:36:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:26.384     10:36:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:08:26.384     10:36:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:08:26.385    10:36:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:26.385     10:36:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:08:26.385    10:36:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:08:26.385    10:36:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:26.385    10:36:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:26.385    10:36:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:08:26.385    10:36:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:26.385    10:36:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:26.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:26.385  		--rc genhtml_branch_coverage=1
00:08:26.385  		--rc genhtml_function_coverage=1
00:08:26.385  		--rc genhtml_legend=1
00:08:26.385  		--rc geninfo_all_blocks=1
00:08:26.385  		--rc geninfo_unexecuted_blocks=1
00:08:26.385  		
00:08:26.385  		'
00:08:26.385    10:36:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:26.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:26.385  		--rc genhtml_branch_coverage=1
00:08:26.385  		--rc genhtml_function_coverage=1
00:08:26.385  		--rc genhtml_legend=1
00:08:26.385  		--rc geninfo_all_blocks=1
00:08:26.385  		--rc geninfo_unexecuted_blocks=1
00:08:26.385  		
00:08:26.385  		'
00:08:26.385    10:36:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:26.385  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:26.385  		--rc genhtml_branch_coverage=1
00:08:26.385  		--rc genhtml_function_coverage=1
00:08:26.385  		--rc genhtml_legend=1
00:08:26.385  		--rc geninfo_all_blocks=1
00:08:26.385  		--rc geninfo_unexecuted_blocks=1
00:08:26.385  		
00:08:26.385  		'
00:08:26.386    10:36:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:26.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:26.386  		--rc genhtml_branch_coverage=1
00:08:26.386  		--rc genhtml_function_coverage=1
00:08:26.386  		--rc genhtml_legend=1
00:08:26.386  		--rc geninfo_all_blocks=1
00:08:26.386  		--rc geninfo_unexecuted_blocks=1
00:08:26.386  		
00:08:26.386  		'
00:08:26.386   10:36:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:08:26.386   10:36:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:08:26.386   10:36:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:08:26.386   10:36:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:08:26.386   10:36:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:26.386   10:36:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:26.386   10:36:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:26.386  ************************************
00:08:26.386  START TEST default_locks
00:08:26.386  ************************************
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1849748
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1849748
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1849748 ']'
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:26.386   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:26.387   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:26.387  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:26.387   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:26.387   10:36:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:26.649  [2024-11-19 10:36:16.217953] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:26.649  [2024-11-19 10:36:16.218053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849748 ]
00:08:26.649  [2024-11-19 10:36:16.352218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.908  [2024-11-19 10:36:16.456508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.476   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:27.476   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:08:27.476   10:36:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1849748
00:08:27.476   10:36:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1849748
00:08:27.476   10:36:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:28.413  lslocks: write error
00:08:28.413   10:36:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1849748
00:08:28.413   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1849748 ']'
00:08:28.413   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1849748
00:08:28.413    10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:08:28.413   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:28.414    10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849748
00:08:28.414   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:28.414   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:28.414   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849748'
00:08:28.414  killing process with pid 1849748
00:08:28.414   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1849748
00:08:28.414   10:36:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1849748
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1849748
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1849748
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:30.951    10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1849748
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1849748 ']'
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:30.951  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:30.951  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1849748) - No such process
00:08:30.951  ERROR: process (pid: 1849748) is no longer running
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:30.951   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:30.952  
00:08:30.952  real	0m4.075s
00:08:30.952  user	0m4.000s
00:08:30.952  sys	0m0.897s
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:30.952   10:36:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:30.952  ************************************
00:08:30.952  END TEST default_locks
00:08:30.952  ************************************
00:08:30.952   10:36:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:08:30.952   10:36:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:30.952   10:36:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:30.952   10:36:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:30.952  ************************************
00:08:30.952  START TEST default_locks_via_rpc
00:08:30.952  ************************************
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1850327
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1850327
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1850327 ']'
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:30.952  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:30.952   10:36:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:30.952  [2024-11-19 10:36:20.371460] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:30.952  [2024-11-19 10:36:20.371567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850327 ]
00:08:30.952  [2024-11-19 10:36:20.508712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.952  [2024-11-19 10:36:20.609170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1850327
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:31.890   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1850327
00:08:32.149   10:36:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1850327
00:08:32.149   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1850327 ']'
00:08:32.149   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1850327
00:08:32.149    10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:08:32.149   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:32.149    10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850327
00:08:32.408   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:32.408   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:32.408   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850327'
00:08:32.408  killing process with pid 1850327
00:08:32.408   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1850327
00:08:32.408   10:36:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1850327
00:08:34.970  
00:08:34.970  real	0m3.896s
00:08:34.970  user	0m3.835s
00:08:34.970  sys	0m0.783s
00:08:34.970   10:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.970   10:36:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:34.970  ************************************
00:08:34.970  END TEST default_locks_via_rpc
00:08:34.970  ************************************
00:08:34.971   10:36:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:08:34.971   10:36:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:34.971   10:36:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.971   10:36:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:34.971  ************************************
00:08:34.971  START TEST non_locking_app_on_locked_coremask
00:08:34.971  ************************************
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1850900
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1850900 /var/tmp/spdk.sock
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1850900 ']'
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:34.971  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:34.971   10:36:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:34.971  [2024-11-19 10:36:24.348896] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:34.971  [2024-11-19 10:36:24.349001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850900 ]
00:08:34.971  [2024-11-19 10:36:24.484678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:34.971  [2024-11-19 10:36:24.586909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1851085
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1851085 /var/tmp/spdk2.sock
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1851085 ']'
00:08:35.906   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:35.907   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:35.907   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:35.907  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:35.907   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:35.907   10:36:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:35.907  [2024-11-19 10:36:25.453923] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:35.907  [2024-11-19 10:36:25.454024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851085 ]
00:08:35.907  [2024-11-19 10:36:25.636365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:35.907  [2024-11-19 10:36:25.636423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.166  [2024-11-19 10:36:25.836422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.706   10:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:38.706   10:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:38.706   10:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1850900
00:08:38.706   10:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1850900
00:08:38.706   10:36:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:39.354  lslocks: write error
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1850900
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1850900 ']'
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1850900
00:08:39.354    10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.354    10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850900
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850900'
00:08:39.354  killing process with pid 1850900
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1850900
00:08:39.354   10:36:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1850900
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1851085
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1851085 ']'
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1851085
00:08:44.626    10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:44.626    10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851085
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851085'
00:08:44.626  killing process with pid 1851085
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1851085
00:08:44.626   10:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1851085
00:08:46.534  
00:08:46.534  real	0m11.675s
00:08:46.534  user	0m11.899s
00:08:46.534  sys	0m1.678s
00:08:46.534   10:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.534   10:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:46.534  ************************************
00:08:46.534  END TEST non_locking_app_on_locked_coremask
00:08:46.534  ************************************
00:08:46.534   10:36:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:08:46.534   10:36:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:46.534   10:36:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.534   10:36:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:46.534  ************************************
00:08:46.534  START TEST locking_app_on_unlocked_coremask
00:08:46.534  ************************************
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1852536
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1852536 /var/tmp/spdk.sock
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1852536 ']'
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:46.534  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:46.534   10:36:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:46.534  [2024-11-19 10:36:36.105356] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:46.534  [2024-11-19 10:36:36.105457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852536 ]
00:08:46.534  [2024-11-19 10:36:36.238214] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:46.534  [2024-11-19 10:36:36.238259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:46.794  [2024-11-19 10:36:36.340384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1852610
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1852610 /var/tmp/spdk2.sock
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1852610 ']'
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:47.363  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:47.363   10:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:47.622  [2024-11-19 10:36:37.208412] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:47.622  [2024-11-19 10:36:37.208517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852610 ]
00:08:47.622  [2024-11-19 10:36:37.395446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:47.882  [2024-11-19 10:36:37.603323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.420   10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:50.420   10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:50.420   10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1852610
00:08:50.420   10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1852610
00:08:50.420   10:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:50.989  lslocks: write error
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1852536
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1852536 ']'
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1852536
00:08:50.989    10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:50.989    10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852536
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852536'
00:08:50.989  killing process with pid 1852536
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1852536
00:08:50.989   10:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1852536
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1852610
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1852610 ']'
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1852610
00:08:56.278    10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:56.278    10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852610
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852610'
00:08:56.278  killing process with pid 1852610
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1852610
00:08:56.278   10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1852610
00:08:58.185  
00:08:58.185  real	0m11.475s
00:08:58.185  user	0m11.660s
00:08:58.185  sys	0m1.605s
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:58.185  ************************************
00:08:58.185  END TEST locking_app_on_unlocked_coremask
00:08:58.185  ************************************
00:08:58.185   10:36:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:08:58.185   10:36:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.185   10:36:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.185   10:36:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:58.185  ************************************
00:08:58.185  START TEST locking_app_on_locked_coremask
00:08:58.185  ************************************
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1854085
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1854085 /var/tmp/spdk.sock
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1854085 ']'
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:58.185  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:58.185   10:36:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:58.185  [2024-11-19 10:36:47.660471] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:58.185  [2024-11-19 10:36:47.660580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854085 ]
00:08:58.185  [2024-11-19 10:36:47.797164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.185  [2024-11-19 10:36:47.900242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1854267
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1854267 /var/tmp/spdk2.sock
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1854267 /var/tmp/spdk2.sock
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:59.124    10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1854267 /var/tmp/spdk2.sock
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1854267 ']'
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:59.124  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:59.124   10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:59.124  [2024-11-19 10:36:48.763171] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:08:59.124  [2024-11-19 10:36:48.763277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854267 ]
00:08:59.383  [2024-11-19 10:36:48.948306] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1854085 has claimed it.
00:08:59.383  [2024-11-19 10:36:48.948386] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:59.641  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1854267) - No such process
00:08:59.641  ERROR: process (pid: 1854267) is no longer running
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1854085
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1854085
00:08:59.641   10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:00.578  lslocks: write error
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1854085
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1854085 ']'
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1854085
00:09:00.578    10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:00.578    10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854085
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854085'
00:09:00.578  killing process with pid 1854085
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1854085
00:09:00.578   10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1854085
00:09:03.112  
00:09:03.112  real	0m4.774s
00:09:03.112  user	0m4.891s
00:09:03.112  sys	0m1.059s
00:09:03.112   10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:03.112   10:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:03.112  ************************************
00:09:03.112  END TEST locking_app_on_locked_coremask
00:09:03.112  ************************************
00:09:03.112   10:36:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:03.112   10:36:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:03.112   10:36:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:03.112   10:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:03.112  ************************************
00:09:03.112  START TEST locking_overlapped_coremask
00:09:03.112  ************************************
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1854821
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1854821 /var/tmp/spdk.sock
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1854821 ']'
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:03.112  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:03.112   10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:09:03.112  [2024-11-19 10:36:52.516172] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:03.112  [2024-11-19 10:36:52.516295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854821 ]
00:09:03.112  [2024-11-19 10:36:52.652267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:03.112  [2024-11-19 10:36:52.758261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:03.112  [2024-11-19 10:36:52.758308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.112  [2024-11-19 10:36:52.758311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1854891
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1854891 /var/tmp/spdk2.sock
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1854891 /var/tmp/spdk2.sock
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:04.048    10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1854891 /var/tmp/spdk2.sock
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1854891 ']'
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:04.048  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:04.048   10:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:04.048  [2024-11-19 10:36:53.647306] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:04.048  [2024-11-19 10:36:53.647408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854891 ]
00:09:04.048  [2024-11-19 10:36:53.838689] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1854821 has claimed it.
00:09:04.048  [2024-11-19 10:36:53.838755] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:04.615  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1854891) - No such process
00:09:04.615  ERROR: process (pid: 1854891) is no longer running
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1854821
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1854821 ']'
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1854821
00:09:04.615    10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:04.615    10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854821
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854821'
00:09:04.615  killing process with pid 1854821
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1854821
00:09:04.615   10:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1854821
00:09:07.148  
00:09:07.148  real	0m4.252s
00:09:07.148  user	0m11.554s
00:09:07.148  sys	0m0.768s
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:07.148  ************************************
00:09:07.148  END TEST locking_overlapped_coremask
00:09:07.148  ************************************
00:09:07.148   10:36:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:07.148   10:36:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:07.148   10:36:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.148   10:36:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:07.148  ************************************
00:09:07.148  START TEST locking_overlapped_coremask_via_rpc
00:09:07.148  ************************************
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1855430
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1855430 /var/tmp/spdk.sock
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1855430 ']'
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:07.148  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:07.148   10:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:07.148  [2024-11-19 10:36:56.847322] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:07.148  [2024-11-19 10:36:56.847429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855430 ]
00:09:07.406  [2024-11-19 10:36:56.984727] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:07.406  [2024-11-19 10:36:56.984779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:07.406  [2024-11-19 10:36:57.088661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.406  [2024-11-19 10:36:57.088721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.406  [2024-11-19 10:36:57.088729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1855606
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1855606 /var/tmp/spdk2.sock
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:08.341   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1855606 ']'
00:09:08.342   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:08.342   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:08.342   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:08.342  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:08.342   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:08.342   10:36:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:08.342  [2024-11-19 10:36:57.999325] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:08.342  [2024-11-19 10:36:57.999431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855606 ]
00:09:08.600  [2024-11-19 10:36:58.186529] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:08.600  [2024-11-19 10:36:58.186581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:08.858  [2024-11-19 10:36:58.416390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:08.858  [2024-11-19 10:36:58.416415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:08.859  [2024-11-19 10:36:58.416448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:10.762    10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:10.762  [2024-11-19 10:37:00.532894] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1855430 has claimed it.
00:09:10.762  request:
00:09:10.762  {
00:09:10.762  "method": "framework_enable_cpumask_locks",
00:09:10.762  "req_id": 1
00:09:10.762  }
00:09:10.762  Got JSON-RPC error response
00:09:10.762  response:
00:09:10.762  {
00:09:10.762  "code": -32603,
00:09:10.762  "message": "Failed to claim CPU core: 2"
00:09:10.762  }
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1855430 /var/tmp/spdk.sock
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1855430 ']'
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:10.762  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:10.762   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1855606 /var/tmp/spdk2.sock
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1855606 ']'
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:11.021  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:11.021   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:11.280  
00:09:11.280  real	0m4.221s
00:09:11.280  user	0m1.128s
00:09:11.280  sys	0m0.267s
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.280   10:37:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:11.280  ************************************
00:09:11.280  END TEST locking_overlapped_coremask_via_rpc
00:09:11.280  ************************************
00:09:11.280   10:37:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:09:11.280   10:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1855430 ]]
00:09:11.280   10:37:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1855430
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1855430 ']'
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1855430
00:09:11.280    10:37:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:11.280    10:37:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855430
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855430'
00:09:11.280  killing process with pid 1855430
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1855430
00:09:11.280   10:37:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1855430
00:09:13.812   10:37:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1855606 ]]
00:09:13.812   10:37:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1855606
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1855606 ']'
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1855606
00:09:13.812    10:37:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:13.812    10:37:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855606
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855606'
00:09:13.812  killing process with pid 1855606
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1855606
00:09:13.812   10:37:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1855606
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1855430 ]]
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1855430
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1855430 ']'
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1855430
00:09:16.445  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1855430) - No such process
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1855430 is not found'
00:09:16.445  Process with pid 1855430 is not found
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1855606 ]]
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1855606
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1855606 ']'
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1855606
00:09:16.445  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1855606) - No such process
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1855606 is not found'
00:09:16.445  Process with pid 1855606 is not found
00:09:16.445   10:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:16.445  
00:09:16.445  real	0m49.989s
00:09:16.445  user	1m24.468s
00:09:16.445  sys	0m8.518s
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.445   10:37:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:16.445  ************************************
00:09:16.445  END TEST cpu_locks
00:09:16.445  ************************************
00:09:16.445  
00:09:16.445  real	1m19.578s
00:09:16.445  user	2m20.710s
00:09:16.445  sys	0m13.338s
00:09:16.445   10:37:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.445   10:37:05 event -- common/autotest_common.sh@10 -- # set +x
00:09:16.445  ************************************
00:09:16.445  END TEST event
00:09:16.445  ************************************
00:09:16.445   10:37:05  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/thread.sh
00:09:16.445   10:37:05  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:16.445   10:37:05  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.445   10:37:05  -- common/autotest_common.sh@10 -- # set +x
00:09:16.445  ************************************
00:09:16.445  START TEST thread
00:09:16.445  ************************************
00:09:16.445   10:37:06 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/thread.sh
00:09:16.445  * Looking for test storage...
00:09:16.445  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread
00:09:16.445    10:37:06 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:16.445     10:37:06 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:09:16.445     10:37:06 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:16.445    10:37:06 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:16.445    10:37:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:16.445    10:37:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:16.445    10:37:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:16.445    10:37:06 thread -- scripts/common.sh@336 -- # IFS=.-:
00:09:16.445    10:37:06 thread -- scripts/common.sh@336 -- # read -ra ver1
00:09:16.445    10:37:06 thread -- scripts/common.sh@337 -- # IFS=.-:
00:09:16.445    10:37:06 thread -- scripts/common.sh@337 -- # read -ra ver2
00:09:16.445    10:37:06 thread -- scripts/common.sh@338 -- # local 'op=<'
00:09:16.445    10:37:06 thread -- scripts/common.sh@340 -- # ver1_l=2
00:09:16.445    10:37:06 thread -- scripts/common.sh@341 -- # ver2_l=1
00:09:16.445    10:37:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:16.445    10:37:06 thread -- scripts/common.sh@344 -- # case "$op" in
00:09:16.445    10:37:06 thread -- scripts/common.sh@345 -- # : 1
00:09:16.445    10:37:06 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:16.445    10:37:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:16.445     10:37:06 thread -- scripts/common.sh@365 -- # decimal 1
00:09:16.445     10:37:06 thread -- scripts/common.sh@353 -- # local d=1
00:09:16.445     10:37:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:16.445     10:37:06 thread -- scripts/common.sh@355 -- # echo 1
00:09:16.445    10:37:06 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:09:16.445     10:37:06 thread -- scripts/common.sh@366 -- # decimal 2
00:09:16.445     10:37:06 thread -- scripts/common.sh@353 -- # local d=2
00:09:16.445     10:37:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:16.445     10:37:06 thread -- scripts/common.sh@355 -- # echo 2
00:09:16.445    10:37:06 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:09:16.446    10:37:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:16.446    10:37:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:16.446    10:37:06 thread -- scripts/common.sh@368 -- # return 0
00:09:16.446    10:37:06 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:16.446    10:37:06 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:16.446  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.446  		--rc genhtml_branch_coverage=1
00:09:16.446  		--rc genhtml_function_coverage=1
00:09:16.446  		--rc genhtml_legend=1
00:09:16.446  		--rc geninfo_all_blocks=1
00:09:16.446  		--rc geninfo_unexecuted_blocks=1
00:09:16.446  		
00:09:16.446  		'
00:09:16.446    10:37:06 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:16.446  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.446  		--rc genhtml_branch_coverage=1
00:09:16.446  		--rc genhtml_function_coverage=1
00:09:16.446  		--rc genhtml_legend=1
00:09:16.446  		--rc geninfo_all_blocks=1
00:09:16.446  		--rc geninfo_unexecuted_blocks=1
00:09:16.446  		
00:09:16.446  		'
00:09:16.446    10:37:06 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:16.446  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.446  		--rc genhtml_branch_coverage=1
00:09:16.446  		--rc genhtml_function_coverage=1
00:09:16.446  		--rc genhtml_legend=1
00:09:16.446  		--rc geninfo_all_blocks=1
00:09:16.446  		--rc geninfo_unexecuted_blocks=1
00:09:16.446  		
00:09:16.446  		'
00:09:16.446    10:37:06 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:16.446  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:16.446  		--rc genhtml_branch_coverage=1
00:09:16.446  		--rc genhtml_function_coverage=1
00:09:16.446  		--rc genhtml_legend=1
00:09:16.446  		--rc geninfo_all_blocks=1
00:09:16.446  		--rc geninfo_unexecuted_blocks=1
00:09:16.446  		
00:09:16.446  		'
00:09:16.446   10:37:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:16.446   10:37:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:16.446   10:37:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.446   10:37:06 thread -- common/autotest_common.sh@10 -- # set +x
00:09:16.446  ************************************
00:09:16.446  START TEST thread_poller_perf
00:09:16.446  ************************************
00:09:16.446   10:37:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:16.705  [2024-11-19 10:37:06.241042] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:16.705  [2024-11-19 10:37:06.241135] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856822 ]
00:09:16.705  [2024-11-19 10:37:06.375256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:16.705  [2024-11-19 10:37:06.485109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.705  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:09:18.081  ======================================
00:09:18.081  busy:2307025808 (cyc)
00:09:18.081  total_run_count: 407000
00:09:18.081  tsc_hz: 2300000000 (cyc)
00:09:18.081  ======================================
00:09:18.081  poller_cost: 5668 (cyc), 2464 (nsec)
00:09:18.081  real	0m1.506s
00:09:18.081  user	0m1.358s
00:09:18.081  sys	0m0.141s
00:09:18.081   10:37:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:18.081   10:37:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:18.081  ************************************
00:09:18.081  END TEST thread_poller_perf
00:09:18.081  ************************************
00:09:18.081   10:37:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:18.081   10:37:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:18.081   10:37:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.081   10:37:07 thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.081  ************************************
00:09:18.081  START TEST thread_poller_perf
00:09:18.081  ************************************
00:09:18.081   10:37:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:18.081  [2024-11-19 10:37:07.804677] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:18.081  [2024-11-19 10:37:07.804785] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857026 ]
00:09:18.340  [2024-11-19 10:37:07.939094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.340  [2024-11-19 10:37:08.041559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.340  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:19.716  ======================================
00:09:19.716  busy:2302832640 (cyc)
00:09:19.716  total_run_count: 5296000
00:09:19.716  tsc_hz: 2300000000 (cyc)
00:09:19.716  ======================================
00:09:19.716  poller_cost: 434 (cyc), 188 (nsec)
00:09:19.716  
00:09:19.716  real	0m1.491s
00:09:19.716  user	0m1.333s
00:09:19.716  sys	0m0.151s
00:09:19.716   10:37:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.716   10:37:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:19.716  ************************************
00:09:19.716  END TEST thread_poller_perf
00:09:19.716  ************************************
00:09:19.716   10:37:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:09:19.716  
00:09:19.716  real	0m3.289s
00:09:19.716  user	0m2.836s
00:09:19.716  sys	0m0.465s
00:09:19.716   10:37:09 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.716   10:37:09 thread -- common/autotest_common.sh@10 -- # set +x
00:09:19.716  ************************************
00:09:19.716  END TEST thread
00:09:19.716  ************************************
00:09:19.716   10:37:09  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:09:19.716   10:37:09  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app/cmdline.sh
00:09:19.716   10:37:09  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:19.716   10:37:09  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.716   10:37:09  -- common/autotest_common.sh@10 -- # set +x
00:09:19.716  ************************************
00:09:19.716  START TEST app_cmdline
00:09:19.716  ************************************
00:09:19.716   10:37:09 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app/cmdline.sh
00:09:19.716  * Looking for test storage...
00:09:19.716  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app
00:09:19.716    10:37:09 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:19.716     10:37:09 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:09:19.716     10:37:09 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:19.975    10:37:09 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@345 -- # : 1
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:19.975     10:37:09 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:19.975    10:37:09 app_cmdline -- scripts/common.sh@368 -- # return 0
00:09:19.975    10:37:09 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:19.975    10:37:09 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:19.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.975  		--rc genhtml_branch_coverage=1
00:09:19.975  		--rc genhtml_function_coverage=1
00:09:19.975  		--rc genhtml_legend=1
00:09:19.975  		--rc geninfo_all_blocks=1
00:09:19.975  		--rc geninfo_unexecuted_blocks=1
00:09:19.975  		
00:09:19.975  		'
00:09:19.975    10:37:09 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:19.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.975  		--rc genhtml_branch_coverage=1
00:09:19.975  		--rc genhtml_function_coverage=1
00:09:19.975  		--rc genhtml_legend=1
00:09:19.975  		--rc geninfo_all_blocks=1
00:09:19.975  		--rc geninfo_unexecuted_blocks=1
00:09:19.975  		
00:09:19.975  		'
00:09:19.975    10:37:09 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:19.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.975  		--rc genhtml_branch_coverage=1
00:09:19.975  		--rc genhtml_function_coverage=1
00:09:19.975  		--rc genhtml_legend=1
00:09:19.976  		--rc geninfo_all_blocks=1
00:09:19.976  		--rc geninfo_unexecuted_blocks=1
00:09:19.976  		
00:09:19.976  		'
00:09:19.976    10:37:09 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:19.976  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:19.976  		--rc genhtml_branch_coverage=1
00:09:19.976  		--rc genhtml_function_coverage=1
00:09:19.976  		--rc genhtml_legend=1
00:09:19.976  		--rc geninfo_all_blocks=1
00:09:19.976  		--rc geninfo_unexecuted_blocks=1
00:09:19.976  		
00:09:19.976  		'
00:09:19.976   10:37:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:09:19.976   10:37:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1857283
00:09:19.976   10:37:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1857283
00:09:19.976   10:37:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1857283 ']'
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:19.976  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:19.976   10:37:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:19.976  [2024-11-19 10:37:09.635850] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:19.976  [2024-11-19 10:37:09.635952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857283 ]
00:09:20.237  [2024-11-19 10:37:09.769877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:20.237  [2024-11-19 10:37:09.871699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.173   10:37:10 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:21.173   10:37:10 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:09:21.173   10:37:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:09:21.173  {
00:09:21.173    "version": "SPDK v25.01-pre git sha1 a0c128549",
00:09:21.173    "fields": {
00:09:21.173      "major": 25,
00:09:21.173      "minor": 1,
00:09:21.173      "patch": 0,
00:09:21.173      "suffix": "-pre",
00:09:21.174      "commit": "a0c128549"
00:09:21.174    }
00:09:21.174  }
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:09:21.174    10:37:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:09:21.174    10:37:10 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.174    10:37:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:09:21.174    10:37:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:21.174    10:37:10 app_cmdline -- app/cmdline.sh@26 -- # sort
00:09:21.174    10:37:10 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:09:21.174   10:37:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.174    10:37:10 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.174    10:37:10 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py ]]
00:09:21.174   10:37:10 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:09:21.432  request:
00:09:21.432  {
00:09:21.432    "method": "env_dpdk_get_mem_stats",
00:09:21.432    "req_id": 1
00:09:21.432  }
00:09:21.432  Got JSON-RPC error response
00:09:21.432  response:
00:09:21.432  {
00:09:21.432    "code": -32601,
00:09:21.432    "message": "Method not found"
00:09:21.432  }
00:09:21.432   10:37:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:09:21.432   10:37:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:21.433   10:37:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1857283
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1857283 ']'
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1857283
00:09:21.433    10:37:11 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:21.433    10:37:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1857283
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1857283'
00:09:21.433  killing process with pid 1857283
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 1857283
00:09:21.433   10:37:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 1857283
00:09:23.967  
00:09:23.967  real	0m4.051s
00:09:23.967  user	0m4.210s
00:09:23.967  sys	0m0.693s
00:09:23.967   10:37:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:23.967   10:37:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:23.967  ************************************
00:09:23.967  END TEST app_cmdline
00:09:23.967  ************************************
00:09:23.967   10:37:13  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app/version.sh
00:09:23.967   10:37:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:23.967   10:37:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:23.967   10:37:13  -- common/autotest_common.sh@10 -- # set +x
00:09:23.967  ************************************
00:09:23.967  START TEST version
00:09:23.967  ************************************
00:09:23.967   10:37:13 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app/version.sh
00:09:23.967  * Looking for test storage...
00:09:23.967  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/app
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:23.967     10:37:13 version -- common/autotest_common.sh@1693 -- # lcov --version
00:09:23.967     10:37:13 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:23.967    10:37:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:23.967    10:37:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:23.967    10:37:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:23.967    10:37:13 version -- scripts/common.sh@336 -- # IFS=.-:
00:09:23.967    10:37:13 version -- scripts/common.sh@336 -- # read -ra ver1
00:09:23.967    10:37:13 version -- scripts/common.sh@337 -- # IFS=.-:
00:09:23.967    10:37:13 version -- scripts/common.sh@337 -- # read -ra ver2
00:09:23.967    10:37:13 version -- scripts/common.sh@338 -- # local 'op=<'
00:09:23.967    10:37:13 version -- scripts/common.sh@340 -- # ver1_l=2
00:09:23.967    10:37:13 version -- scripts/common.sh@341 -- # ver2_l=1
00:09:23.967    10:37:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:23.967    10:37:13 version -- scripts/common.sh@344 -- # case "$op" in
00:09:23.967    10:37:13 version -- scripts/common.sh@345 -- # : 1
00:09:23.967    10:37:13 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:23.967    10:37:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:23.967     10:37:13 version -- scripts/common.sh@365 -- # decimal 1
00:09:23.967     10:37:13 version -- scripts/common.sh@353 -- # local d=1
00:09:23.967     10:37:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:23.967     10:37:13 version -- scripts/common.sh@355 -- # echo 1
00:09:23.967    10:37:13 version -- scripts/common.sh@365 -- # ver1[v]=1
00:09:23.967     10:37:13 version -- scripts/common.sh@366 -- # decimal 2
00:09:23.967     10:37:13 version -- scripts/common.sh@353 -- # local d=2
00:09:23.967     10:37:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:23.967     10:37:13 version -- scripts/common.sh@355 -- # echo 2
00:09:23.967    10:37:13 version -- scripts/common.sh@366 -- # ver2[v]=2
00:09:23.967    10:37:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:23.967    10:37:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:23.967    10:37:13 version -- scripts/common.sh@368 -- # return 0
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:23.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.967  		--rc genhtml_branch_coverage=1
00:09:23.967  		--rc genhtml_function_coverage=1
00:09:23.967  		--rc genhtml_legend=1
00:09:23.967  		--rc geninfo_all_blocks=1
00:09:23.967  		--rc geninfo_unexecuted_blocks=1
00:09:23.967  		
00:09:23.967  		'
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:23.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.967  		--rc genhtml_branch_coverage=1
00:09:23.967  		--rc genhtml_function_coverage=1
00:09:23.967  		--rc genhtml_legend=1
00:09:23.967  		--rc geninfo_all_blocks=1
00:09:23.967  		--rc geninfo_unexecuted_blocks=1
00:09:23.967  		
00:09:23.967  		'
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:23.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.967  		--rc genhtml_branch_coverage=1
00:09:23.967  		--rc genhtml_function_coverage=1
00:09:23.967  		--rc genhtml_legend=1
00:09:23.967  		--rc geninfo_all_blocks=1
00:09:23.967  		--rc geninfo_unexecuted_blocks=1
00:09:23.967  		
00:09:23.967  		'
00:09:23.967    10:37:13 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:23.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.967  		--rc genhtml_branch_coverage=1
00:09:23.967  		--rc genhtml_function_coverage=1
00:09:23.967  		--rc genhtml_legend=1
00:09:23.967  		--rc geninfo_all_blocks=1
00:09:23.967  		--rc geninfo_unexecuted_blocks=1
00:09:23.967  		
00:09:23.967  		'
00:09:23.967    10:37:13 version -- app/version.sh@17 -- # get_header_version major
00:09:23.967    10:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vhost-phy-autotest/spdk/include/spdk/version.h
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # cut -f2
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # tr -d '"'
00:09:23.967   10:37:13 version -- app/version.sh@17 -- # major=25
00:09:23.967    10:37:13 version -- app/version.sh@18 -- # get_header_version minor
00:09:23.967    10:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vhost-phy-autotest/spdk/include/spdk/version.h
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # cut -f2
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # tr -d '"'
00:09:23.967   10:37:13 version -- app/version.sh@18 -- # minor=1
00:09:23.967    10:37:13 version -- app/version.sh@19 -- # get_header_version patch
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # tr -d '"'
00:09:23.967    10:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vhost-phy-autotest/spdk/include/spdk/version.h
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # cut -f2
00:09:23.967   10:37:13 version -- app/version.sh@19 -- # patch=0
00:09:23.967    10:37:13 version -- app/version.sh@20 -- # get_header_version suffix
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # cut -f2
00:09:23.967    10:37:13 version -- app/version.sh@14 -- # tr -d '"'
00:09:23.967    10:37:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vhost-phy-autotest/spdk/include/spdk/version.h
00:09:23.967   10:37:13 version -- app/version.sh@20 -- # suffix=-pre
00:09:23.967   10:37:13 version -- app/version.sh@22 -- # version=25.1
00:09:23.967   10:37:13 version -- app/version.sh@25 -- # (( patch != 0 ))
00:09:23.967   10:37:13 version -- app/version.sh@28 -- # version=25.1rc0
00:09:23.968   10:37:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python
00:09:23.968    10:37:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:09:24.227   10:37:13 version -- app/version.sh@30 -- # py_version=25.1rc0
00:09:24.227   10:37:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:09:24.227  
00:09:24.227  real	0m0.272s
00:09:24.227  user	0m0.158s
00:09:24.227  sys	0m0.162s
00:09:24.227   10:37:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.227   10:37:13 version -- common/autotest_common.sh@10 -- # set +x
00:09:24.227  ************************************
00:09:24.227  END TEST version
00:09:24.227  ************************************
00:09:24.227   10:37:13  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:09:24.227    10:37:13  -- spdk/autotest.sh@194 -- # uname -s
00:09:24.227   10:37:13  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:09:24.227   10:37:13  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:09:24.227   10:37:13  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:09:24.227   10:37:13  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@260 -- # timing_exit lib
00:09:24.227   10:37:13  -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:24.227   10:37:13  -- common/autotest_common.sh@10 -- # set +x
00:09:24.227   10:37:13  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@311 -- # '[' 1 -eq 1 ']'
00:09:24.227   10:37:13  -- spdk/autotest.sh@312 -- # HUGENODE=0
00:09:24.227   10:37:13  -- spdk/autotest.sh@312 -- # run_test vhost /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh --iso
00:09:24.227   10:37:13  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:24.227   10:37:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.227   10:37:13  -- common/autotest_common.sh@10 -- # set +x
00:09:24.227  ************************************
00:09:24.227  START TEST vhost
00:09:24.227  ************************************
00:09:24.227   10:37:13 vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh --iso
00:09:24.227  * Looking for test storage...
00:09:24.227  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost
00:09:24.227    10:37:13 vhost -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:24.227     10:37:13 vhost -- common/autotest_common.sh@1693 -- # lcov --version
00:09:24.227     10:37:13 vhost -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:24.487    10:37:14 vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:24.487    10:37:14 vhost -- scripts/common.sh@336 -- # IFS=.-:
00:09:24.487    10:37:14 vhost -- scripts/common.sh@336 -- # read -ra ver1
00:09:24.487    10:37:14 vhost -- scripts/common.sh@337 -- # IFS=.-:
00:09:24.487    10:37:14 vhost -- scripts/common.sh@337 -- # read -ra ver2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@338 -- # local 'op=<'
00:09:24.487    10:37:14 vhost -- scripts/common.sh@340 -- # ver1_l=2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@341 -- # ver2_l=1
00:09:24.487    10:37:14 vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:24.487    10:37:14 vhost -- scripts/common.sh@344 -- # case "$op" in
00:09:24.487    10:37:14 vhost -- scripts/common.sh@345 -- # : 1
00:09:24.487    10:37:14 vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:24.487    10:37:14 vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:24.487     10:37:14 vhost -- scripts/common.sh@365 -- # decimal 1
00:09:24.487     10:37:14 vhost -- scripts/common.sh@353 -- # local d=1
00:09:24.487     10:37:14 vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.487     10:37:14 vhost -- scripts/common.sh@355 -- # echo 1
00:09:24.487    10:37:14 vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:09:24.487     10:37:14 vhost -- scripts/common.sh@366 -- # decimal 2
00:09:24.487     10:37:14 vhost -- scripts/common.sh@353 -- # local d=2
00:09:24.487     10:37:14 vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:24.487     10:37:14 vhost -- scripts/common.sh@355 -- # echo 2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:09:24.487    10:37:14 vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:24.487    10:37:14 vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:24.487    10:37:14 vhost -- scripts/common.sh@368 -- # return 0
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:24.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.487  		--rc genhtml_branch_coverage=1
00:09:24.487  		--rc genhtml_function_coverage=1
00:09:24.487  		--rc genhtml_legend=1
00:09:24.487  		--rc geninfo_all_blocks=1
00:09:24.487  		--rc geninfo_unexecuted_blocks=1
00:09:24.487  		
00:09:24.487  		'
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:24.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.487  		--rc genhtml_branch_coverage=1
00:09:24.487  		--rc genhtml_function_coverage=1
00:09:24.487  		--rc genhtml_legend=1
00:09:24.487  		--rc geninfo_all_blocks=1
00:09:24.487  		--rc geninfo_unexecuted_blocks=1
00:09:24.487  		
00:09:24.487  		'
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:24.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.487  		--rc genhtml_branch_coverage=1
00:09:24.487  		--rc genhtml_function_coverage=1
00:09:24.487  		--rc genhtml_legend=1
00:09:24.487  		--rc geninfo_all_blocks=1
00:09:24.487  		--rc geninfo_unexecuted_blocks=1
00:09:24.487  		
00:09:24.487  		'
00:09:24.487    10:37:14 vhost -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:24.487  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.487  		--rc genhtml_branch_coverage=1
00:09:24.487  		--rc genhtml_function_coverage=1
00:09:24.487  		--rc genhtml_legend=1
00:09:24.487  		--rc geninfo_all_blocks=1
00:09:24.487  		--rc geninfo_unexecuted_blocks=1
00:09:24.487  		
00:09:24.487  		'
00:09:24.487   10:37:14 vhost -- vhost/vhost.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh
00:09:24.487    10:37:14 vhost -- vhost/common.sh@6 -- # : false
00:09:24.487    10:37:14 vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:24.487    10:37:14 vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:24.487    10:37:14 vhost -- vhost/common.sh@9 -- # : qemu-img
00:09:24.487     10:37:14 vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/..
00:09:24.487    10:37:14 vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vhost-phy-autotest
00:09:24.487    10:37:14 vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:24.487    10:37:14 vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:24.487    10:37:14 vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:24.487    10:37:14 vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:24.487    10:37:14 vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:24.487      10:37:14 vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh
00:09:24.487     10:37:14 vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost
00:09:24.487    10:37:14 vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost
00:09:24.487    10:37:14 vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:24.487    10:37:14 vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:24.487    10:37:14 vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:24.487    10:37:14 vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:24.487    10:37:14 vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:24.487     10:37:14 vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:24.487     10:37:14 vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:24.487     10:37:14 vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:24.487     10:37:14 vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:24.487     10:37:14 vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:24.487     10:37:14 vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:24.487     10:37:14 vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:24.487     10:37:14 vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:24.487     10:37:14 vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:24.487     10:37:14 vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:24.487     10:37:14 vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:24.487     10:37:14 vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:24.487     10:37:14 vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:24.487     10:37:14 vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:24.487     10:37:14 vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:24.487     10:37:14 vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:24.487     10:37:14 vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:24.487     10:37:14 vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:24.487     10:37:14 vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:24.487     10:37:14 vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:24.487    10:37:14 vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/common.sh
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:24.487     10:37:14 vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:24.487      10:37:14 vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:24.487       10:37:14 vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:24.487       10:37:14 vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:24.487       10:37:14 vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:24.487       10:37:14 vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:09:24.487      10:37:14 vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:09:24.487   10:37:14 vhost -- vhost/vhost.sh@11 -- # echo 'Running SPDK vhost fio autotest...'
00:09:24.487  Running SPDK vhost fio autotest...
00:09:24.487    10:37:14 vhost -- vhost/vhost.sh@12 -- # uname -s
00:09:24.487   10:37:14 vhost -- vhost/vhost.sh@12 -- # [[ Linux != Linux ]]
00:09:24.487   10:37:14 vhost -- vhost/vhost.sh@19 -- # vhosttestinit
00:09:24.487   10:37:14 vhost -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:09:24.487   10:37:14 vhost -- vhost/common.sh@38 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/setup.sh
00:09:27.769  0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:09:27.769  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:09:27.769  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:09:27.769  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:09:27.769  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:09:28.027   10:37:17 vhost -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:28.027   10:37:17 vhost -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:28.027   10:37:17 vhost -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:28.027   10:37:17 vhost -- vhost/vhost.sh@21 -- # run_test vhost_negative /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other/negative.sh
00:09:28.027   10:37:17 vhost -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:28.027   10:37:17 vhost -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:28.027   10:37:17 vhost -- common/autotest_common.sh@10 -- # set +x
00:09:28.027  ************************************
00:09:28.027  START TEST vhost_negative
00:09:28.027  ************************************
00:09:28.027   10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other/negative.sh
00:09:28.027  * Looking for test storage...
00:09:28.027  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other
00:09:28.027    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:28.027     10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1693 -- # lcov --version
00:09:28.027     10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:28.287    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@336 -- # IFS=.-:
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@336 -- # read -ra ver1
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@337 -- # IFS=.-:
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@337 -- # read -ra ver2
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@338 -- # local 'op=<'
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@340 -- # ver1_l=2
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@341 -- # ver2_l=1
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@344 -- # case "$op" in
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@345 -- # : 1
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@365 -- # decimal 1
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@353 -- # local d=1
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@355 -- # echo 1
00:09:28.287    10:37:17 vhost.vhost_negative -- scripts/common.sh@365 -- # ver1[v]=1
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@366 -- # decimal 2
00:09:28.287     10:37:17 vhost.vhost_negative -- scripts/common.sh@353 -- # local d=2
00:09:28.288     10:37:17 vhost.vhost_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:28.288     10:37:17 vhost.vhost_negative -- scripts/common.sh@355 -- # echo 2
00:09:28.288    10:37:17 vhost.vhost_negative -- scripts/common.sh@366 -- # ver2[v]=2
00:09:28.288    10:37:17 vhost.vhost_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:28.288    10:37:17 vhost.vhost_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:28.288    10:37:17 vhost.vhost_negative -- scripts/common.sh@368 -- # return 0
00:09:28.288    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:28.288    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:28.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.288  		--rc genhtml_branch_coverage=1
00:09:28.288  		--rc genhtml_function_coverage=1
00:09:28.288  		--rc genhtml_legend=1
00:09:28.288  		--rc geninfo_all_blocks=1
00:09:28.288  		--rc geninfo_unexecuted_blocks=1
00:09:28.288  		
00:09:28.288  		'
00:09:28.288    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:28.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.288  		--rc genhtml_branch_coverage=1
00:09:28.288  		--rc genhtml_function_coverage=1
00:09:28.288  		--rc genhtml_legend=1
00:09:28.288  		--rc geninfo_all_blocks=1
00:09:28.288  		--rc geninfo_unexecuted_blocks=1
00:09:28.288  		
00:09:28.288  		'
00:09:28.288    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:28.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.288  		--rc genhtml_branch_coverage=1
00:09:28.288  		--rc genhtml_function_coverage=1
00:09:28.288  		--rc genhtml_legend=1
00:09:28.288  		--rc geninfo_all_blocks=1
00:09:28.288  		--rc geninfo_unexecuted_blocks=1
00:09:28.288  		
00:09:28.288  		'
00:09:28.288    10:37:17 vhost.vhost_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:28.288  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.288  		--rc genhtml_branch_coverage=1
00:09:28.288  		--rc genhtml_function_coverage=1
00:09:28.288  		--rc genhtml_legend=1
00:09:28.288  		--rc geninfo_all_blocks=1
00:09:28.288  		--rc geninfo_unexecuted_blocks=1
00:09:28.288  		
00:09:28.288  		'
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@10 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@6 -- # : false
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@9 -- # : qemu-img
00:09:28.288     10:37:17 vhost.vhost_negative -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/..
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vhost-phy-autotest
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:28.288      10:37:17 vhost.vhost_negative -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other/negative.sh
00:09:28.288     10:37:17 vhost.vhost_negative -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:28.288     10:37:17 vhost.vhost_negative -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:28.288    10:37:17 vhost.vhost_negative -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/common.sh
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:28.288     10:37:17 vhost.vhost_negative -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:28.288      10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:28.288       10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:28.288       10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:28.288       10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:28.288       10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@10 -- # echo 2
00:09:28.288      10:37:17 vhost.vhost_negative -- scheduler/cgroups.sh@244 -- # cgroup_version=2
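The cgroup probe traced above decides the hierarchy version by whether `/sys/fs/cgroup/cgroup.controllers` exists (the cgroup v2 unified-hierarchy marker) and whether `cpuset` is listed in it. A hedged re-creation (the optional root argument is added here only for testability; the cgroups.sh helper takes no arguments, and the v1 branch is an assumption about how a legacy cpuset mount would be detected):

```shell
# Illustrative sketch of the cgroup-version check seen in the trace.
# Prints 2 for a cgroup v2 unified hierarchy with cpuset, 1 for a legacy
# v1 cpuset mount, and fails otherwise.
check_cgroup() {
    local root=${1:-/sys/fs/cgroup}
    if [[ -e $root/cgroup.controllers ]]; then
        # cgroup v2: require the cpuset controller, as the trace does
        if [[ $(<"$root/cgroup.controllers") == *cpuset* ]]; then
            echo 2
            return 0
        fi
    elif [[ -e $root/cpuset/tasks ]]; then
        # hypothetical v1 detection: a mounted legacy cpuset controller
        echo 1
        return 0
    fi
    return 1
}
```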
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@25 -- # run_in_background=false
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@26 -- # getopts xh-: optchar
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@41 -- # vhosttestinit
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@43 -- # trap error_exit ERR
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@45 -- # notice 'Testing vhost command line arguments'
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Testing vhost command line arguments'
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:28.288   10:37:17 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Testing vhost command line arguments'
00:09:28.288  INFO: Testing vhost command line arguments
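The notice trace above boils down to a thin helper that shifts a message type off the argument list and prefixes it to the echoed text. A minimal re-creation (names mirror the trace, but this is an illustrative sketch, not the vhost/common.sh source, which also routes extra detail through a verbose-output path not reproduced here):

```shell
# Illustrative sketch of the message/notice/warning helpers traced above.
message() {
    local msg_type=$1
    shift
    echo -e "$msg_type: $*"
}
notice()  { message INFO "$@"; }
warning() { message WARN "$@"; }

notice "Testing vhost command line arguments"   # -> INFO: Testing vhost command line arguments
```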
00:09:28.288   10:37:17 vhost.vhost_negative -- other/negative.sh@47 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -c /path/to/non_existing_file/conf -S /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/other -e 0x0 -s 1024 -d -h --silence-noticelog
00:09:28.288  /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost [options]
00:09:28.288  
00:09:28.288  CPU options:
00:09:28.288   -m, --cpumask <mask or list>    core mask (like 0xF) or a '[]'-enclosed core list for DPDK
00:09:28.288                                   (like [0,1,10])
00:09:28.288       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:09:28.288                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:09:28.288                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:09:28.288                             Within the group, '-' is used for range separator,
00:09:28.288                             ',' is used for single number separator.
00:09:28.288                             '( )' can be omitted for single element group,
00:09:28.288                             '@' can be omitted if cpus and lcores have the same value
00:09:28.288       --disable-cpumask-locks    Disable CPU core lock files.
00:09:28.288       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:09:28.288                             pollers in the app support interrupt mode)
00:09:28.288   -p, --main-core <id>      main (primary) core for DPDK
00:09:28.289  
00:09:28.289  Configuration options:
00:09:28.289   -c, --config, --json  <config>     JSON config file
00:09:28.289   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:09:28.289       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:09:28.289       --wait-for-rpc        wait for RPCs to initialize subsystems
00:09:28.289       --rpcs-allowed	   comma-separated list of permitted RPCs
00:09:28.289       --json-ignore-init-errors    don't exit on invalid config entry
00:09:28.289  
00:09:28.289  Memory options:
00:09:28.289       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:09:28.289       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:09:28.289       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:09:28.289   -R, --huge-unlink         unlink huge files after initialization
00:09:28.289   -n, --mem-channels <num>  number of memory channels used for DPDK
00:09:28.289   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:09:28.289       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:09:28.289       --no-huge             run without using hugepages
00:09:28.289       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:09:28.289   -i, --shm-id <id>         shared memory ID (optional)
00:09:28.289   -g, --single-file-segments   force creating just one hugetlbfs file
00:09:28.289  
00:09:28.289  PCI options:
00:09:28.289   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:09:28.289   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:09:28.289   -u, --no-pci              disable PCI access
00:09:28.289       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:09:28.289  
00:09:28.289  Log options:
00:09:28.289   -L, --logflag <flag>      enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 
00:09:28.289                             app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 
00:09:28.289                             bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 
00:09:28.289                             blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 
00:09:28.289                             blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 
00:09:28.289                             iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, nbd, 
00:09:28.289                             notify_rpc, nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, 
00:09:28.289                             scsi, sock, sock_posix, spdk_aio_mgr_io, thread, trace, vbdev_delay, 
00:09:28.289                             vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 
00:09:28.289                             vbdev_zone_block, vfio_pci, vfio_user, vhost, vhost_blk, vhost_blk_data, 
00:09:28.289                             vhost_ring, vhost_rpc, vhost_scsi, vhost_scsi_data, vhost_scsi_queue, 
00:09:28.289                             virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 
00:09:28.289                             virtio_vfio_user, vmd)
00:09:28.289       --silence-noticelog   disable notice level logging to stderr
00:09:28.289  
00:09:28.289  Trace options:
00:09:28.289       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:09:28.289                                   setting 0 to disable trace (default 32768)
00:09:28.289                                   Tracepoints vary in size and can use more than one trace entry.
00:09:28.289   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:09:28.289                             group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 
00:09:28.289                             ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 
00:09:28.289                             blob, bdev_raid, scheduler, all).
00:09:28.289                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:09:28.289                             a tracepoint group. First tpoint inside a group can be enabled by
00:09:28.289                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:09:28.289                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:09:28.289                             in /include/spdk_internal/trace_defs.h
00:09:28.289  
00:09:28.289  Other options:
00:09:28.289   -h, --help                show this usage
00:09:28.289   -v, --version             print SPDK version
00:09:28.289   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:09:28.289       --env-context         Opaque context for use of the env implementation
00:09:28.289  
00:09:28.289  Application specific:
00:09:28.289   -f <path>                 save pid to file under given path
00:09:28.289   -S <path>                 directory where to create vhost sockets (default: pwd)
00:09:28.289   10:37:18 vhost.vhost_negative -- other/negative.sh@50 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -c /path/to/non_existing_file/conf -f /root/vhost_test/vhost/vhost.pid
00:09:28.547  [2024-11-19 10:37:18.157343] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:28.547  [2024-11-19 10:37:18.157441] [ DPDK EAL parameters: vhost --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859371 ]
00:09:28.547  EAL: No free 2048 kB hugepages reported on node 1
00:09:28.547  [2024-11-19 10:37:18.296646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.805  [2024-11-19 10:37:18.401855] app.c: 973:spdk_app_start: *ERROR*: Read JSON configuration file /path/to/non_existing_file/conf failed: No such file or directory
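The negative test above deliberately points `-c` at a nonexistent file to exercise the error path. For contrast, a minimal well-formed config for `-c`/`--json` might look like the following sketch (the path is illustrative; an empty "subsystems" list is, to my understanding, the smallest accepted layout in SPDK's JSON config format):

```shell
# Hypothetical minimal JSON config for `vhost -c` (contents are a hedged sketch).
cat > /tmp/minimal_vhost.json <<'EOF'
{
  "subsystems": []
}
EOF
```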
00:09:29.063   10:37:18 vhost.vhost_negative -- other/negative.sh@53 -- # rm -f /root/vhost_test/vhost/vhost.pid
00:09:29.063   10:37:18 vhost.vhost_negative -- other/negative.sh@56 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -x -h
00:09:29.063  /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost: invalid option -- 'x'
00:09:29.064   10:37:18 vhost.vhost_negative -- other/negative.sh@61 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -t vhost_scsi -h
00:09:29.324  /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost: invalid option -- 't'
00:09:29.324   10:37:18 vhost.vhost_negative -- other/negative.sh@62 -- # warning 'vhost did not start with trace flags enabled, but ignoring this as it might not be a debug build'
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@90 -- # message WARN 'vhost did not start with trace flags enabled, but ignoring this as it might not be a debug build'
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=WARN
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'WARN: vhost did not started with trace flags enabled but ignoring this as it might not be a debug build'
00:09:29.324  WARN: vhost did not started with trace flags enabled but ignoring this as it might not be a debug build
00:09:29.324   10:37:18 vhost.vhost_negative -- other/negative.sh@66 -- # notice ===============
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO ===============
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: ==============='
00:09:29.324  INFO: ===============
00:09:29.324   10:37:18 vhost.vhost_negative -- other/negative.sh@67 -- # notice ''
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO ''
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: '
00:09:29.324  INFO: 
00:09:29.324   10:37:18 vhost.vhost_negative -- other/negative.sh@68 -- # notice 'running SPDK'
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'running SPDK'
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: running SPDK'
00:09:29.324  INFO: running SPDK
00:09:29.324   10:37:18 vhost.vhost_negative -- other/negative.sh@69 -- # notice ''
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO ''
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.324   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: '
00:09:29.325  INFO: 
00:09:29.325   10:37:18 vhost.vhost_negative -- other/negative.sh@70 -- # vhost_run -n 0 -- -m 0xf
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@116 -- # local OPTIND
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@117 -- # local vhost_name
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@118 -- # local run_gen_nvme=true
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@119 -- # local vhost_bin=vhost
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@120 -- # vhost_args=()
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@120 -- # local vhost_args
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@121 -- # cmd=()
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@121 -- # local cmd
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@124 -- # case "$optchar" in
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@125 -- # vhost_name=0
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@137 -- # shift 3
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@139 -- # vhost_args=("$@")
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@141 -- # [[ -z 0 ]]
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@146 -- # local vhost_dir
00:09:29.325    10:37:18 vhost.vhost_negative -- vhost/common.sh@147 -- # get_vhost_dir 0
00:09:29.325    10:37:18 vhost.vhost_negative -- vhost/common.sh@105 -- # local vhost_name=0
00:09:29.325    10:37:18 vhost.vhost_negative -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:29.325    10:37:18 vhost.vhost_negative -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@147 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@148 -- # local vhost_app=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@149 -- # local vhost_log_file=/root/vhost_test/vhost/0/vhost.log
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@150 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@151 -- # local vhost_socket=/root/vhost_test/vhost/0/usvhost
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@152 -- # notice 'starting vhost app in background'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'starting vhost app in background'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: starting vhost app in background'
00:09:29.325  INFO: starting vhost app in background
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@153 -- # [[ -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@154 -- # [[ -d /root/vhost_test/vhost/0 ]]
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@155 -- # mkdir -p /root/vhost_test/vhost/0
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@157 -- # [[ ! -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost ]]
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@162 -- # cmd=("$vhost_app" "-r" "$vhost_dir/rpc.sock" "${vhost_args[@]}")
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@163 -- # [[ vhost =~ vhost ]]
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@164 -- # cmd+=(-S "$vhost_dir")
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@167 -- # notice 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:29.325  INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@168 -- # notice 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:29.325  INFO: Socket:      /root/vhost_test/vhost/0/usvhost
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@169 -- # notice 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0'
00:09:29.325  INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@171 -- # timing_enter vhost_start
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@173 -- # iobuf_small_count=16383
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@174 -- # iobuf_large_count=2047
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@177 -- # vhost_pid=1859562
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@178 -- # echo 1859562
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@180 -- # notice 'waiting for app to run...'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@176 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0 --wait-for-rpc
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'waiting for app to run...'
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: waiting for app to run...'
00:09:29.325  INFO: waiting for app to run...
00:09:29.325   10:37:18 vhost.vhost_negative -- vhost/common.sh@181 -- # waitforlisten 1859562 /root/vhost_test/vhost/0/rpc.sock
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@835 -- # '[' -z 1859562 ']'
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:09:29.325  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:29.325   10:37:18 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:29.325  [2024-11-19 10:37:19.045496] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:29.325  [2024-11-19 10:37:19.045596] [ DPDK EAL parameters: vhost --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859562 ]
00:09:29.586  EAL: No free 2048 kB hugepages reported on node 1
00:09:29.586  [2024-11-19 10:37:19.182254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:29.586  [2024-11-19 10:37:19.293973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:29.586  [2024-11-19 10:37:19.293982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:29.586  [2024-11-19 10:37:19.294070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.586  [2024-11-19 10:37:19.294081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:30.159   10:37:19 vhost.vhost_negative -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:30.159   10:37:19 vhost.vhost_negative -- common/autotest_common.sh@868 -- # return 0
00:09:30.159   10:37:19 vhost.vhost_negative -- vhost/common.sh@183 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock iobuf_set_options --small-pool-count=16383 --large-pool-count=2047
00:09:30.417   10:37:20 vhost.vhost_negative -- vhost/common.sh@188 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock framework_start_init
00:09:30.984   10:37:20 vhost.vhost_negative -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0 != *\-\-\n\o\-\p\c\i* ]]
00:09:30.984   10:37:20 vhost.vhost_negative -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -S /root/vhost_test/vhost/0 != *\-\u* ]]
00:09:30.984   10:37:20 vhost.vhost_negative -- vhost/common.sh@192 -- # true
00:09:30.984   10:37:20 vhost.vhost_negative -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:30.984   10:37:20 vhost.vhost_negative -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:09:32.359   10:37:22 vhost.vhost_negative -- vhost/common.sh@196 -- # notice 'vhost started - pid=1859562'
00:09:32.359   10:37:22 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'vhost started - pid=1859562'
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: vhost started - pid=1859562'
00:09:32.360  INFO: vhost started - pid=1859562
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@198 -- # timing_exit vhost_start
00:09:32.360   10:37:22 vhost.vhost_negative -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:32.360   10:37:22 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:32.360   10:37:22 vhost.vhost_negative -- other/negative.sh@71 -- # notice ''
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO ''
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:32.360   10:37:22 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: '
00:09:32.360  INFO: 
00:09:32.360    10:37:22 vhost.vhost_negative -- other/negative.sh@72 -- # get_vhost_dir 0
00:09:32.360    10:37:22 vhost.vhost_negative -- vhost/common.sh@105 -- # local vhost_name=0
00:09:32.360    10:37:22 vhost.vhost_negative -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:32.360    10:37:22 vhost.vhost_negative -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:32.360   10:37:22 vhost.vhost_negative -- other/negative.sh@72 -- # rpc_py='/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:32.360   10:37:22 vhost.vhost_negative -- other/negative.sh@73 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create -b Malloc0 128 4096
00:09:32.617  Malloc0
00:09:32.617   10:37:22 vhost.vhost_negative -- other/negative.sh@74 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create -b Malloc1 128 4096
00:09:33.184  Malloc1
00:09:33.184   10:37:22 vhost.vhost_negative -- other/negative.sh@75 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create -b Malloc2 128 4096
00:09:33.441  Malloc2
00:09:33.442   10:37:23 vhost.vhost_negative -- other/negative.sh@76 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_split_create Malloc2 8
00:09:33.700  Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7
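The `bdev_split_create Malloc2 8` call above returns eight partitions named with a `p<index>` suffix, as the output line shows. A sketch of that naming convention (inferred from this log, not taken from SPDK sources):

```python
def split_bdev_names(base, count):
    # Split partitions are named "<base>p<index>", matching the
    # "Malloc2p0 ... Malloc2p7" output seen in the log.
    return [f"{base}p{i}" for i in range(count)]

print(" ".join(split_bdev_names("Malloc2", 8)))
```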
00:09:33.700   10:37:23 vhost.vhost_negative -- other/negative.sh@79 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_get_controllers -n nonexistent
00:09:33.700  request:
00:09:33.700  {
00:09:33.700    "name": "nonexistent",
00:09:33.700    "method": "vhost_get_controllers",
00:09:33.700    "req_id": 1
00:09:33.700  }
00:09:33.700  Got JSON-RPC error response
00:09:33.700  response:
00:09:33.700  {
00:09:33.700    "code": -32603,
00:09:33.700    "message": "No such device"
00:09:33.700  }
00:09:33.700   10:37:23 vhost.vhost_negative -- other/negative.sh@83 -- # notice 'Set coalescing for nonexistent controller'
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Set coalescing for nonexistent controller'
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:33.700   10:37:23 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Set coalescing for nonexistent controller'
00:09:33.700  INFO: Set coalescing for nonexistent controller
00:09:33.700   10:37:23 vhost.vhost_negative -- other/negative.sh@84 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_controller_set_coalescing nonexistent 1 100
00:09:33.958  request:
00:09:33.958  {
00:09:33.958    "ctrlr": "nonexistent",
00:09:33.958    "delay_base_us": 1,
00:09:33.958    "iops_threshold": 100,
00:09:33.958    "method": "vhost_controller_set_coalescing",
00:09:33.958    "req_id": 1
00:09:33.958  }
00:09:33.958  Got JSON-RPC error response
00:09:33.958  response:
00:09:33.958  {
00:09:33.958    "code": -32602,
00:09:33.958    "message": "No such device"
00:09:33.958  }
00:09:33.958   10:37:23 vhost.vhost_negative -- other/negative.sh@89 -- # notice 'Trying to remove nonexistent controller'
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove nonexistent controller'
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:33.958   10:37:23 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove nonexistent controller'
00:09:33.958  INFO: Trying to remove nonexistent controller
00:09:33.958   10:37:23 vhost.vhost_negative -- other/negative.sh@90 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_delete_controller unk0
00:09:34.216   10:37:23 vhost.vhost_negative -- other/negative.sh@95 -- # notice 'Trying to create scsi controller with incorrect cpumask outside of application cpumask'
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create scsi controller with incorrect cpumask outside of application cpumask'
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:34.216   10:37:23 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create scsi controller with incorrect cpumask outside of application cpumask'
00:09:34.216  INFO: Trying to create scsi controller with incorrect cpumask outside of application cpumask
00:09:34.216   10:37:23 vhost.vhost_negative -- other/negative.sh@96 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller vhost.invalid.cpumask --cpumask 0xf0
00:09:34.474  [2024-11-19 10:37:24.046342] vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:09:34.474  [2024-11-19 10:37:24.046428] vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:09:34.474  request:
00:09:34.474  {
00:09:34.474    "ctrlr": "vhost.invalid.cpumask",
00:09:34.474    "delay": false,
00:09:34.474    "cpumask": "0xf0",
00:09:34.474    "method": "vhost_create_scsi_controller",
00:09:34.474    "req_id": 1
00:09:34.474  }
00:09:34.474  Got JSON-RPC error response
00:09:34.474  response:
00:09:34.474  {
00:09:34.474    "code": -32602,
00:09:34.474    "message": "Invalid argument"
00:09:34.474  }
00:09:34.474   10:37:24 vhost.vhost_negative -- other/negative.sh@100 -- # notice 'Trying to create scsi controller with incorrect cpumask partially outside of application cpumask'
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create scsi controller with incorrect cpumask partially outside of application cpumask'
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:34.474   10:37:24 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create scsi controller with incorrect cpumask partially outside of application cpumask'
00:09:34.474  INFO: Trying to create scsi controller with incorrect cpumask partially outside of application cpumask
00:09:34.474   10:37:24 vhost.vhost_negative -- other/negative.sh@101 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller vhost.invalid.cpumask --cpumask 0xff
00:09:34.474  [2024-11-19 10:37:24.246905] vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:09:34.474  [2024-11-19 10:37:24.246981] vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:09:34.474  request:
00:09:34.474  {
00:09:34.474    "ctrlr": "vhost.invalid.cpumask",
00:09:34.474    "delay": false,
00:09:34.474    "cpumask": "0xff",
00:09:34.474    "method": "vhost_create_scsi_controller",
00:09:34.474    "req_id": 1
00:09:34.474  }
00:09:34.474  Got JSON-RPC error response
00:09:34.474  response:
00:09:34.474  {
00:09:34.474    "code": -32602,
00:09:34.474    "message": "Invalid argument"
00:09:34.474  }
00:09:34.732   10:37:24 vhost.vhost_negative -- other/negative.sh@105 -- # notice 'Trying to remove device from nonexistent scsi controller'
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove device from nonexistent scsi controller'
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove device from nonexistent scsi controller'
00:09:34.732  INFO: Trying to remove device from nonexistent scsi controller
00:09:34.732   10:37:24 vhost.vhost_negative -- other/negative.sh@106 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target vhost.nonexistent.name 0
00:09:34.732  request:
00:09:34.732  {
00:09:34.732    "ctrlr": "vhost.nonexistent.name",
00:09:34.732    "scsi_target_num": 0,
00:09:34.732    "method": "vhost_scsi_controller_remove_target",
00:09:34.732    "req_id": 1
00:09:34.732  }
00:09:34.732  Got JSON-RPC error response
00:09:34.732  response:
00:09:34.732  {
00:09:34.732    "code": -32602,
00:09:34.732    "message": "No such device"
00:09:34.732  }
00:09:34.732   10:37:24 vhost.vhost_negative -- other/negative.sh@110 -- # notice 'Trying to add device to nonexistent scsi controller'
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to add device to nonexistent scsi controller'
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:34.732   10:37:24 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to add device to nonexistent scsi controller'
00:09:34.732  INFO: Trying to add device to nonexistent scsi controller
00:09:34.732   10:37:24 vhost.vhost_negative -- other/negative.sh@111 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target vhost.nonexistent.name 0 Malloc0
00:09:34.990  request:
00:09:34.990  {
00:09:34.990    "ctrlr": "vhost.nonexistent.name",
00:09:34.990    "scsi_target_num": 0,
00:09:34.990    "bdev_name": "Malloc0",
00:09:34.990    "method": "vhost_scsi_controller_add_target",
00:09:34.990    "req_id": 1
00:09:34.990  }
00:09:34.990  Got JSON-RPC error response
00:09:34.990  response:
00:09:34.990  {
00:09:34.990    "code": -32602,
00:09:34.990    "message": "No such device"
00:09:34.990  }
00:09:34.990   10:37:24 vhost.vhost_negative -- other/negative.sh@115 -- # notice 'Trying to create scsi controller with incorrect name'
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create scsi controller with incorrect name'
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:34.990   10:37:24 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create scsi controller with incorrect name'
00:09:34.990  INFO: Trying to create scsi controller with incorrect name
00:09:34.990   10:37:24 vhost.vhost_negative -- other/negative.sh@116 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller .
00:09:35.322  [2024-11-19 10:37:24.857518] rte_vhost_user.c:1614:vhost_register_unix_socket: *ERROR*: Cannot create a domain socket at path "/root/vhost_test/vhost/0/.": The file already exists and is not a socket.
00:09:35.322  request:
00:09:35.322  {
00:09:35.322    "ctrlr": ".",
00:09:35.322    "delay": false,
00:09:35.322    "method": "vhost_create_scsi_controller",
00:09:35.322    "req_id": 1
00:09:35.322  }
00:09:35.322  Got JSON-RPC error response
00:09:35.322  response:
00:09:35.322  {
00:09:35.322    "code": -32602,
00:09:35.322    "message": "Input/output error"
00:09:35.322  }
00:09:35.322   10:37:24 vhost.vhost_negative -- other/negative.sh@120 -- # notice 'Creating controller naa.0'
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Creating controller naa.0'
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:35.323   10:37:24 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Creating controller naa.0'
00:09:35.323  INFO: Creating controller naa.0
00:09:35.323   10:37:24 vhost.vhost_negative -- other/negative.sh@121 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller naa.0
00:09:35.323  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0) vhost-user server: socket created, fd: 342
00:09:35.323  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0) binding succeeded
00:09:35.323   10:37:25 vhost.vhost_negative -- other/negative.sh@123 -- # notice 'Pass invalid parameter for vhost_controller_set_coalescing'
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Pass invalid parameter for vhost_controller_set_coalescing'
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:35.323   10:37:25 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Pass invalid parameter for vhost_controller_set_coalescing'
00:09:35.323  INFO: Pass invalid parameter for vhost_controller_set_coalescing
00:09:35.323   10:37:25 vhost.vhost_negative -- other/negative.sh@124 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_controller_set_coalescing naa.0 -1 100
00:09:35.581  request:
00:09:35.581  {
00:09:35.581    "ctrlr": "naa.0",
00:09:35.581    "delay_base_us": -1,
00:09:35.581    "iops_threshold": 100,
00:09:35.581    "method": "vhost_controller_set_coalescing",
00:09:35.581    "req_id": 1
00:09:35.581  }
00:09:35.581  Got JSON-RPC error response
00:09:35.581  response:
00:09:35.581  {
00:09:35.581    "code": -32602,
00:09:35.581    "message": "Invalid argument"
00:09:35.581  }
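Annotation: the coalescing request above is rejected because `delay_base_us=-1` is out of range. A sketch of the JSON-RPC payload `rpc.py` sends, with a client-side version of the non-negative check (assumption: SPDK enforces this server-side; the validation here is illustrative only):

```python
def build_coalescing_request(ctrlr, delay_base_us, iops_threshold, req_id=1):
    # Both tunables must be non-negative; -1 maps to the
    # "Invalid argument" response seen in the log.
    if delay_base_us < 0 or iops_threshold < 0:
        raise ValueError("Invalid argument")
    return {
        "method": "vhost_controller_set_coalescing",
        "req_id": req_id,
        "params": {
            "ctrlr": ctrlr,
            "delay_base_us": delay_base_us,
            "iops_threshold": iops_threshold,
        },
    }

print(build_coalescing_request("naa.0", 0, 100))
```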
00:09:35.581   10:37:25 vhost.vhost_negative -- other/negative.sh@128 -- # notice 'Trying to add nonexistent device to scsi controller'
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to add nonexistent device to scsi controller'
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:35.581   10:37:25 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to add nonexistent device to scsi controller'
00:09:35.581  INFO: Trying to add nonexistent device to scsi controller
00:09:35.581   10:37:25 vhost.vhost_negative -- other/negative.sh@129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 0 nonexistent_bdev
00:09:35.839  [2024-11-19 10:37:25.474400] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nonexistent_bdev
00:09:35.839  [2024-11-19 10:37:25.474443] lun.c: 445:scsi_lun_construct: *ERROR*: bdev nonexistent_bdev cannot be opened, error=-19
00:09:35.839  [2024-11-19 10:37:25.474461] vhost_scsi.c:1159:spdk_vhost_scsi_dev_add_tgt: *ERROR*: naa.0: couldn't create SCSI target 0 using bdev 'nonexistent_bdev'
00:09:35.839  request:
00:09:35.839  {
00:09:35.839    "ctrlr": "naa.0",
00:09:35.839    "scsi_target_num": 0,
00:09:35.839    "bdev_name": "nonexistent_bdev",
00:09:35.839    "method": "vhost_scsi_controller_add_target",
00:09:35.839    "req_id": 1
00:09:35.839  }
00:09:35.839  Got JSON-RPC error response
00:09:35.839  response:
00:09:35.839  {
00:09:35.839    "code": -32602,
00:09:35.839    "message": "Invalid argument"
00:09:35.839  }
00:09:35.839   10:37:25 vhost.vhost_negative -- other/negative.sh@133 -- # notice 'Adding device to naa.0 with slot number exceeding max'
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Adding device to naa.0 with slot number exceeding max'
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:35.839   10:37:25 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Adding device to naa.0 with slot number exceeding max'
00:09:35.839  INFO: Adding device to naa.0 with slot number exceeding max
00:09:35.839   10:37:25 vhost.vhost_negative -- other/negative.sh@134 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 8 Malloc0
00:09:36.097  [2024-11-19 10:37:25.674971] vhost_scsi.c:1127:spdk_vhost_scsi_dev_add_tgt: *ERROR*: naa.0: SCSI target number is too big (got 8, max 7), started from 0.
00:09:36.097  request:
00:09:36.097  {
00:09:36.097    "ctrlr": "naa.0",
00:09:36.097    "scsi_target_num": 8,
00:09:36.097    "bdev_name": "Malloc0",
00:09:36.097    "method": "vhost_scsi_controller_add_target",
00:09:36.097    "req_id": 1
00:09:36.097  }
00:09:36.097  Got JSON-RPC error response
00:09:36.097  response:
00:09:36.097  {
00:09:36.097    "code": -32602,
00:09:36.097    "message": "Invalid argument"
00:09:36.097  }
00:09:36.097    10:37:25 vhost.vhost_negative -- other/negative.sh@138 -- # seq 0 7
00:09:36.097   10:37:25 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:36.097   10:37:25 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p0
00:09:36.355  0
00:09:36.355   10:37:25 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:36.355   10:37:25 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p1
00:09:36.355  1
00:09:36.355   10:37:26 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:36.355   10:37:26 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p2
00:09:36.613  2
00:09:36.613   10:37:26 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:36.613   10:37:26 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p3
00:09:36.870  3
00:09:36.870   10:37:26 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:36.870   10:37:26 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p4
00:09:37.129  4
00:09:37.129   10:37:26 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:37.129   10:37:26 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p5
00:09:37.129  5
00:09:37.387   10:37:26 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:37.387   10:37:26 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p6
00:09:37.387  6
00:09:37.387   10:37:27 vhost.vhost_negative -- other/negative.sh@138 -- # for i in $(seq 0 7)
00:09:37.387   10:37:27 vhost.vhost_negative -- other/negative.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc2p7
00:09:37.645  7
00:09:37.645   10:37:27 vhost.vhost_negative -- other/negative.sh@141 -- # notice 'All slots are occupied. Try to add one more device to naa.0'
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'All slots are occupied. Try to add one more device to naa.0'
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:37.645   10:37:27 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: All slots are occupied. Try to add one more device to naa.0'
00:09:37.645  INFO: All slots are occupied. Try to add one more device to naa.0
00:09:37.645   10:37:27 vhost.vhost_negative -- other/negative.sh@142 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 -1 Malloc0
00:09:37.903  [2024-11-19 10:37:27.504344] vhost_scsi.c:1122:spdk_vhost_scsi_dev_add_tgt: *ERROR*: naa.0: all SCSI target slots are already in use.
00:09:37.903  request:
00:09:37.903  {
00:09:37.903    "ctrlr": "naa.0",
00:09:37.903    "scsi_target_num": -1,
00:09:37.903    "bdev_name": "Malloc0",
00:09:37.903    "method": "vhost_scsi_controller_add_target",
00:09:37.903    "req_id": 1
00:09:37.903  }
00:09:37.903  Got JSON-RPC error response
00:09:37.903  response:
00:09:37.903  {
00:09:37.903    "code": -32602,
00:09:37.903    "message": "No space left on device"
00:09:37.903  }
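Annotation: the sequence above exercises the target-slot rules: slots run 0..7, `scsi_target_num=-1` auto-assigns the lowest free slot (hence the printed 0..7), slot 8 is "too big", and a full table returns "No space left on device". A toy model of that per-controller target table (illustrative only, not SPDK's `vhost_scsi.c`):

```python
MAX_TARGETS = 8  # vhost-scsi target numbers are 0..7

class ScsiController:
    def __init__(self):
        self.targets = {}

    def add_target(self, num, bdev):
        if num == -1:
            # -1 requests auto-assignment of the lowest free slot.
            free = [i for i in range(MAX_TARGETS) if i not in self.targets]
            if not free:
                raise OSError(28, "No space left on device")  # ENOSPC
            num = free[0]
        elif not 0 <= num < MAX_TARGETS:
            raise ValueError(f"target number too big (got {num}, max {MAX_TARGETS - 1})")
        elif num in self.targets:
            raise FileExistsError(f"SCSI target {num} already occupied")
        self.targets[num] = bdev
        return num

    def remove_target(self, num):
        self.targets.pop(num, None)
```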
00:09:37.903    10:37:27 vhost.vhost_negative -- other/negative.sh@145 -- # seq 0 7
00:09:37.903   10:37:27 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:37.903   10:37:27 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 0
00:09:38.161   10:37:27 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:38.161   10:37:27 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 1
00:09:38.420   10:37:27 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:38.420   10:37:27 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 2
00:09:38.420   10:37:28 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:38.420   10:37:28 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 3
00:09:38.678   10:37:28 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:38.678   10:37:28 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 4
00:09:38.935   10:37:28 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:38.935   10:37:28 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 5
00:09:39.193   10:37:28 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:39.193   10:37:28 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 6
00:09:39.193   10:37:28 vhost.vhost_negative -- other/negative.sh@145 -- # for i in $(seq 0 7)
00:09:39.193   10:37:28 vhost.vhost_negative -- other/negative.sh@146 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 7
00:09:39.451   10:37:29 vhost.vhost_negative -- other/negative.sh@149 -- # notice 'Adding initial device (0) to naa.0'
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Adding initial device (0) to naa.0'
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:39.451   10:37:29 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Adding initial device (0) to naa.0'
00:09:39.451  INFO: Adding initial device (0) to naa.0
00:09:39.451   10:37:29 vhost.vhost_negative -- other/negative.sh@150 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 0 Malloc0
00:09:39.708  0
00:09:39.708   10:37:29 vhost.vhost_negative -- other/negative.sh@152 -- # notice 'Adding device to naa.0 with slot number 0'
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Adding device to naa.0 with slot number 0'
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:39.708   10:37:29 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Adding device to naa.0 with slot number 0'
00:09:39.708  INFO: Adding device to naa.0 with slot number 0
00:09:39.708   10:37:29 vhost.vhost_negative -- other/negative.sh@153 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0 0 Malloc1
00:09:39.966  [2024-11-19 10:37:29.538350] vhost_scsi.c:1140:spdk_vhost_scsi_dev_add_tgt: *ERROR*: naa.0: SCSI target 0 already occupied
00:09:39.966  request:
00:09:39.966  {
00:09:39.966    "ctrlr": "naa.0",
00:09:39.966    "scsi_target_num": 0,
00:09:39.966    "bdev_name": "Malloc1",
00:09:39.966    "method": "vhost_scsi_controller_add_target",
00:09:39.966    "req_id": 1
00:09:39.966  }
00:09:39.966  Got JSON-RPC error response
00:09:39.966  response:
00:09:39.966  {
00:09:39.966    "code": -32602,
00:09:39.966    "message": "File exists"
00:09:39.966  }
00:09:39.966   10:37:29 vhost.vhost_negative -- other/negative.sh@157 -- # notice 'Trying to remove nonexistent device on existing controller'
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove nonexistent device on existing controller'
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:39.966   10:37:29 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove nonexistent device on existing controller'
00:09:39.966  INFO: Trying to remove nonexistent device on existing controller
00:09:39.966   10:37:29 vhost.vhost_negative -- other/negative.sh@158 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 1
00:09:40.224   10:37:29 vhost.vhost_negative -- other/negative.sh@162 -- # notice 'Trying to remove existing device from a controller'
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove existing device from a controller'
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove existing device from a controller'
00:09:40.224  INFO: Trying to remove existing device from a controller
00:09:40.224   10:37:29 vhost.vhost_negative -- other/negative.sh@163 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 0
00:09:40.224   10:37:29 vhost.vhost_negative -- other/negative.sh@165 -- # notice 'Trying to remove a just-deleted device from a controller again'
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove a just-deleted device from a controller again'
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:40.224   10:37:29 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove a just-deleted device from a controller again'
00:09:40.224  INFO: Trying to remove a just-deleted device from a controller again
00:09:40.224   10:37:29 vhost.vhost_negative -- other/negative.sh@166 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 0
00:09:40.482   10:37:30 vhost.vhost_negative -- other/negative.sh@170 -- # notice 'Trying to remove scsi target with invalid slot number'
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove scsi target with invalid slot number'
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:40.482   10:37:30 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove scsi target with invalid slot number'
00:09:40.482  INFO: Trying to remove scsi target with invalid slot number
00:09:40.482   10:37:30 vhost.vhost_negative -- other/negative.sh@171 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0 8
00:09:40.739  [2024-11-19 10:37:30.360773] vhost_scsi.c:1240:spdk_vhost_scsi_dev_remove_tgt: *ERROR*: naa.0: invalid SCSI target number 8
00:09:40.739   10:37:30 vhost.vhost_negative -- other/negative.sh@176 -- # notice 'Trying to create block controller with incorrect cpumask outside of application cpumask'
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create block controller with incorrect cpumask outside of application cpumask'
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:40.739   10:37:30 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create block controller with incorrect cpumask outside of application cpumask'
00:09:40.739  INFO: Trying to create block controller with incorrect cpumask outside of application cpumask
00:09:40.740   10:37:30 vhost.vhost_negative -- other/negative.sh@177 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller vhost.invalid.cpumask Malloc0 --cpumask 0xf0
00:09:40.997  [2024-11-19 10:37:30.577508] vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:09:40.997  [2024-11-19 10:37:30.577595] vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:09:40.997  request:
00:09:40.997  {
00:09:40.997    "ctrlr": "vhost.invalid.cpumask",
00:09:40.997    "dev_name": "Malloc0",
00:09:40.997    "cpumask": "0xf0",
00:09:40.997    "readonly": false,
00:09:40.997    "packed_ring": false,
00:09:40.997    "method": "vhost_create_blk_controller",
00:09:40.997    "req_id": 1
00:09:40.997  }
00:09:40.997  Got JSON-RPC error response
00:09:40.997  response:
00:09:40.997  {
00:09:40.997    "code": -32602,
00:09:40.997    "message": "Invalid argument"
00:09:40.997  }
00:09:40.997   10:37:30 vhost.vhost_negative -- other/negative.sh@181 -- # notice 'Trying to create block controller with incorrect cpumask partially outside of application cpumask'
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create block controller with incorrect cpumask partially outside of application cpumask'
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:40.997   10:37:30 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create block controller with incorrect cpumask partially outside of application cpumask'
00:09:40.997  INFO: Trying to create block controller with incorrect cpumask partially outside of application cpumask
00:09:40.997   10:37:30 vhost.vhost_negative -- other/negative.sh@182 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller vhost.invalid.cpumask Malloc0 --cpumask 0xff
00:09:41.255  [2024-11-19 10:37:30.790094] vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:09:41.255  [2024-11-19 10:37:30.790176] vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:09:41.255  request:
00:09:41.255  {
00:09:41.255    "ctrlr": "vhost.invalid.cpumask",
00:09:41.255    "dev_name": "Malloc0",
00:09:41.255    "cpumask": "0xff",
00:09:41.255    "readonly": false,
00:09:41.255    "packed_ring": false,
00:09:41.255    "method": "vhost_create_blk_controller",
00:09:41.255    "req_id": 1
00:09:41.255  }
00:09:41.255  Got JSON-RPC error response
00:09:41.255  response:
00:09:41.255  {
00:09:41.255    "code": -32602,
00:09:41.255    "message": "Invalid argument"
00:09:41.255  }
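Annotation: both cpumask rejections above reduce to one rule: a controller cpumask must be a subset of the application's core mask (0xf in this run, i.e. cores 0-3). `0xf0` lies entirely outside it and `0xff` partially. A sketch of that subset check (illustrative, not SPDK's `vhost_parse_core_mask`):

```python
APP_CORE_MASK = 0xF  # this test app was started on cores 0-3

def cpumask_valid(cpumask: int, core_mask: int = APP_CORE_MASK) -> bool:
    # Any bit set outside the application's core mask makes the
    # controller cpumask invalid, whether wholly or partially outside.
    return cpumask != 0 and (cpumask & ~core_mask) == 0

print(cpumask_valid(0xF0))  # False: entirely outside 0xf
print(cpumask_valid(0xFF))  # False: bits 4-7 fall outside 0xf
print(cpumask_valid(0x3))   # True: cores 0-1 are within 0xf
```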
00:09:41.255   10:37:30 vhost.vhost_negative -- other/negative.sh@186 -- # notice 'Trying to remove nonexistent block controller'
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to remove nonexistent block controller'
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:41.255   10:37:30 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to remove nonexistent block controller'
00:09:41.255  INFO: Trying to remove nonexistent block controller
00:09:41.255   10:37:30 vhost.vhost_negative -- other/negative.sh@187 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_delete_controller vhost.nonexistent.name
00:09:41.255  request:
00:09:41.255  {
00:09:41.255    "ctrlr": "vhost.nonexistent.name",
00:09:41.255    "method": "vhost_delete_controller",
00:09:41.255    "req_id": 1
00:09:41.255  }
00:09:41.255  Got JSON-RPC error response
00:09:41.255  response:
00:09:41.255  {
00:09:41.255    "code": -32602,
00:09:41.255    "message": "No such device"
00:09:41.255  }
00:09:41.255   10:37:31 vhost.vhost_negative -- other/negative.sh@191 -- # notice 'Trying to create block controller with incorrect name'
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create block controller with incorrect name'
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:41.255   10:37:31 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create block controller with incorrect name'
00:09:41.255  INFO: Trying to create block controller with incorrect name
00:09:41.255   10:37:31 vhost.vhost_negative -- other/negative.sh@192 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller . Malloc0
00:09:41.513  [2024-11-19 10:37:31.196274] rte_vhost_user.c:1614:vhost_register_unix_socket: *ERROR*: Cannot create a domain socket at path "/root/vhost_test/vhost/0/.": The file already exists and is not a socket.
00:09:41.513  request:
00:09:41.513  {
00:09:41.513    "ctrlr": ".",
00:09:41.513    "dev_name": "Malloc0",
00:09:41.513    "readonly": false,
00:09:41.513    "packed_ring": false,
00:09:41.513    "method": "vhost_create_blk_controller",
00:09:41.513    "req_id": 1
00:09:41.513  }
00:09:41.513  Got JSON-RPC error response
00:09:41.513  response:
00:09:41.513  {
00:09:41.513    "code": -32602,
00:09:41.513    "message": "Input/output error"
00:09:41.513  }
00:09:41.513   10:37:31 vhost.vhost_negative -- other/negative.sh@196 -- # notice 'Trying to create block controller with nonexistent bdev'
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create block controller with nonexistent bdev'
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:41.513   10:37:31 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create block controller with nonexistent bdev'
00:09:41.513  INFO: Trying to create block controller with nonexistent bdev
00:09:41.513   10:37:31 vhost.vhost_negative -- other/negative.sh@197 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller blk_ctrl Malloc3
00:09:41.770  [2024-11-19 10:37:31.395823] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:41.770  [2024-11-19 10:37:31.395859] vhost_blk.c:1641:spdk_vhost_blk_construct: *ERROR*: blk_ctrl: could not open bdev 'Malloc3', error=-19
00:09:41.770  request:
00:09:41.770  {
00:09:41.770    "ctrlr": "blk_ctrl",
00:09:41.770    "dev_name": "Malloc3",
00:09:41.770    "readonly": false,
00:09:41.770    "packed_ring": false,
00:09:41.770    "method": "vhost_create_blk_controller",
00:09:41.770    "req_id": 1
00:09:41.770  }
00:09:41.770  Got JSON-RPC error response
00:09:41.770  response:
00:09:41.770  {
00:09:41.770    "code": -32602,
00:09:41.770    "message": "No such device"
00:09:41.770  }
00:09:41.770   10:37:31 vhost.vhost_negative -- other/negative.sh@201 -- # notice 'Trying to create block controller with claimed bdev'
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create block controller with claimed bdev'
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:41.770   10:37:31 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create block controller with claimed bdev'
00:09:41.770  INFO: Trying to create block controller with claimed bdev
00:09:41.770   10:37:31 vhost.vhost_negative -- other/negative.sh@202 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create_lvstore Malloc0 lvs
00:09:42.028  a40cd487-c35e-4cfe-b934-df90781f8229
00:09:42.028   10:37:31 vhost.vhost_negative -- other/negative.sh@203 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller blk_ctrl Malloc0
00:09:42.285  [2024-11-19 10:37:31.831464] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type read_many_write_one by module lvol
00:09:42.285  [2024-11-19 10:37:31.831522] vhost_blk.c:1641:spdk_vhost_blk_construct: *ERROR*: blk_ctrl: could not open bdev 'Malloc0', error=-1
00:09:42.285  request:
00:09:42.285  {
00:09:42.285    "ctrlr": "blk_ctrl",
00:09:42.285    "dev_name": "Malloc0",
00:09:42.285    "readonly": false,
00:09:42.285    "packed_ring": false,
00:09:42.285    "method": "vhost_create_blk_controller",
00:09:42.285    "req_id": 1
00:09:42.285  }
00:09:42.285  Got JSON-RPC error response
00:09:42.285  response:
00:09:42.285  {
00:09:42.285    "code": -32602,
00:09:42.285    "message": "Operation not permitted"
00:09:42.285  }
00:09:42.285   10:37:31 vhost.vhost_negative -- other/negative.sh@206 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_delete_lvstore -l lvs
00:09:42.285   10:37:32 vhost.vhost_negative -- other/negative.sh@208 -- # notice 'Trying to create already existing block transport layer'
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Trying to create already existing block transport layer'
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:42.285   10:37:32 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Trying to create already existing block transport layer'
00:09:42.285  INFO: Trying to create already existing block transport layer
00:09:42.285   10:37:32 vhost.vhost_negative -- other/negative.sh@211 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock virtio_blk_create_transport vhost_user_blk
00:09:42.543  request:
00:09:42.543  {
00:09:42.543    "name": "vhost_user_blk",
00:09:42.544    "method": "virtio_blk_create_transport",
00:09:42.544    "req_id": 1
00:09:42.544  }
00:09:42.544  Got JSON-RPC error response
00:09:42.544  response:
00:09:42.544  {
00:09:42.544    "code": -17,
00:09:42.544    "message": "File exists"
00:09:42.544  }
00:09:42.544   10:37:32 vhost.vhost_negative -- other/negative.sh@215 -- # notice 'Testing done -> shutting down'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'Testing done -> shutting down'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: Testing done -> shutting down'
00:09:42.544  INFO: Testing done -> shutting down
00:09:42.544   10:37:32 vhost.vhost_negative -- other/negative.sh@216 -- # notice 'killing vhost app'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'killing vhost app'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost app'
00:09:42.544  INFO: killing vhost app
00:09:42.544   10:37:32 vhost.vhost_negative -- other/negative.sh@217 -- # vhost_kill 0
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@202 -- # local rc=0
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@203 -- # local vhost_name=0
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@210 -- # local vhost_dir
00:09:42.544    10:37:32 vhost.vhost_negative -- vhost/common.sh@211 -- # get_vhost_dir 0
00:09:42.544    10:37:32 vhost.vhost_negative -- vhost/common.sh@105 -- # local vhost_name=0
00:09:42.544    10:37:32 vhost.vhost_negative -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:42.544    10:37:32 vhost.vhost_negative -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:09:42.544   10:37:32 vhost.vhost_negative -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:42.544   10:37:32 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@220 -- # local vhost_pid
00:09:42.544    10:37:32 vhost.vhost_negative -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@221 -- # vhost_pid=1859562
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1859562) app'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1859562) app'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1859562) app'
00:09:42.544  INFO: killing vhost (PID 1859562) app
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@224 -- # kill -INT 1859562
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:42.544  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i = 0 ))
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@227 -- # kill -0 1859562
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@228 -- # echo .
00:09:42.544  .
00:09:42.544   10:37:32 vhost.vhost_negative -- vhost/common.sh@229 -- # sleep 1
00:09:43.919   10:37:33 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i++ ))
00:09:43.919   10:37:33 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:43.919   10:37:33 vhost.vhost_negative -- vhost/common.sh@227 -- # kill -0 1859562
00:09:43.919   10:37:33 vhost.vhost_negative -- vhost/common.sh@228 -- # echo .
00:09:43.919  .
00:09:43.919   10:37:33 vhost.vhost_negative -- vhost/common.sh@229 -- # sleep 1
00:09:44.853   10:37:34 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i++ ))
00:09:44.853   10:37:34 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:44.853   10:37:34 vhost.vhost_negative -- vhost/common.sh@227 -- # kill -0 1859562
00:09:44.853   10:37:34 vhost.vhost_negative -- vhost/common.sh@228 -- # echo .
00:09:44.853  .
00:09:44.853   10:37:34 vhost.vhost_negative -- vhost/common.sh@229 -- # sleep 1
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i++ ))
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@227 -- # kill -0 1859562
00:09:45.789  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1859562) - No such process
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@231 -- # break
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@234 -- # kill -0 1859562
00:09:45.789  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1859562) - No such process
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@239 -- # kill -0 1859562
00:09:45.789  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1859562) - No such process
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@245 -- # is_pid_child 1859562
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1668 -- # local pid=1859562 _pid
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1670 -- # read -r _pid
00:09:45.789    10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1667 -- # jobs -pr
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1670 -- # read -r _pid
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1674 -- # return 1
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@261 -- # return 0
00:09:45.789   10:37:35 vhost.vhost_negative -- other/negative.sh@219 -- # notice 'EXIT DONE'
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO 'EXIT DONE'
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: EXIT DONE'
00:09:45.789  INFO: EXIT DONE
00:09:45.789   10:37:35 vhost.vhost_negative -- other/negative.sh@220 -- # notice ===============
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@94 -- # message INFO ===============
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@60 -- # local verbose_out
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@61 -- # false
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@62 -- # verbose_out=
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@70 -- # shift
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@71 -- # echo -e 'INFO: ==============='
00:09:45.789  INFO: ===============
00:09:45.789   10:37:35 vhost.vhost_negative -- other/negative.sh@222 -- # vhosttestfini
00:09:45.789   10:37:35 vhost.vhost_negative -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:09:45.789  
00:09:45.789  real	0m17.696s
00:09:45.789  user	1m9.931s
00:09:45.789  sys	0m3.296s
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:45.789   10:37:35 vhost.vhost_negative -- common/autotest_common.sh@10 -- # set +x
00:09:45.789  ************************************
00:09:45.789  END TEST vhost_negative
00:09:45.789  ************************************
00:09:45.789   10:37:35 vhost -- vhost/vhost.sh@23 -- # run_test vhost_boot /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/vhost_boot.sh --vm_image=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:45.789   10:37:35 vhost -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:45.789   10:37:35 vhost -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.789   10:37:35 vhost -- common/autotest_common.sh@10 -- # set +x
00:09:45.789  ************************************
00:09:45.789  START TEST vhost_boot
00:09:45.789  ************************************
00:09:45.789   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/vhost_boot.sh --vm_image=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:45.789  +++ dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/vhost_boot.sh
00:09:45.789  ++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.789  + testdir=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.789  ++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/../../..
00:09:45.789  + rootdir=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:09:45.789  + source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh
00:09:45.789  ++ rpc_py=rpc_cmd
00:09:45.789  ++ set -e
00:09:45.789  ++ shopt -s nullglob
00:09:45.789  ++ shopt -s extglob
00:09:45.789  ++ shopt -s inherit_errexit
00:09:45.789  ++ '[' -z /var/jenkins/workspace/vhost-phy-autotest/spdk/../output ']'
00:09:45.789  ++ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/build_config.sh ]]
00:09:45.789  ++ source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/build_config.sh
00:09:45.789  +++ CONFIG_WPDK_DIR=
00:09:45.789  +++ CONFIG_ASAN=y
00:09:45.789  +++ CONFIG_VBDEV_COMPRESS=n
00:09:45.789  +++ CONFIG_HAVE_EXECINFO_H=y
00:09:45.789  +++ CONFIG_USDT=n
00:09:45.789  +++ CONFIG_CUSTOMOCF=n
00:09:45.789  +++ CONFIG_PREFIX=/usr/local
00:09:45.789  +++ CONFIG_RBD=n
00:09:45.789  +++ CONFIG_LIBDIR=
00:09:45.789  +++ CONFIG_IDXD=y
00:09:45.789  +++ CONFIG_NVME_CUSE=y
00:09:45.789  +++ CONFIG_SMA=n
00:09:45.789  +++ CONFIG_VTUNE=n
00:09:45.789  +++ CONFIG_TSAN=n
00:09:45.789  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:09:45.789  +++ CONFIG_VFIO_USER_DIR=
00:09:45.789  +++ CONFIG_MAX_NUMA_NODES=1
00:09:45.789  +++ CONFIG_PGO_CAPTURE=n
00:09:45.789  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:09:45.789  +++ CONFIG_ENV=/var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk
00:09:45.789  +++ CONFIG_LTO=n
00:09:45.789  +++ CONFIG_ISCSI_INITIATOR=y
00:09:45.789  +++ CONFIG_CET=n
00:09:45.790  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:09:45.790  +++ CONFIG_OCF_PATH=
00:09:45.790  +++ CONFIG_RDMA_SET_TOS=y
00:09:45.790  +++ CONFIG_AIO_FSDEV=y
00:09:45.790  +++ CONFIG_HAVE_ARC4RANDOM=y
00:09:45.790  +++ CONFIG_HAVE_LIBARCHIVE=n
00:09:45.790  +++ CONFIG_UBLK=y
00:09:45.790  +++ CONFIG_ISAL_CRYPTO=y
00:09:45.790  +++ CONFIG_OPENSSL_PATH=
00:09:45.790  +++ CONFIG_OCF=n
00:09:45.790  +++ CONFIG_FUSE=n
00:09:45.790  +++ CONFIG_VTUNE_DIR=
00:09:45.790  +++ CONFIG_FUZZER_LIB=
00:09:45.790  +++ CONFIG_FUZZER=n
00:09:45.790  +++ CONFIG_FSDEV=y
00:09:45.790  +++ CONFIG_DPDK_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build
00:09:45.790  +++ CONFIG_CRYPTO=n
00:09:45.790  +++ CONFIG_PGO_USE=n
00:09:45.790  +++ CONFIG_VHOST=y
00:09:45.790  +++ CONFIG_DAOS=n
00:09:45.790  +++ CONFIG_DPDK_INC_DIR=
00:09:45.790  +++ CONFIG_DAOS_DIR=
00:09:45.790  +++ CONFIG_UNIT_TESTS=n
00:09:45.790  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:09:45.790  +++ CONFIG_VIRTIO=y
00:09:45.790  +++ CONFIG_DPDK_UADK=n
00:09:45.790  +++ CONFIG_COVERAGE=y
00:09:45.790  +++ CONFIG_RDMA=y
00:09:45.790  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:09:45.790  +++ CONFIG_HAVE_LZ4=n
00:09:45.790  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:09:45.790  +++ CONFIG_URING_PATH=
00:09:45.790  +++ CONFIG_XNVME=n
00:09:45.790  +++ CONFIG_VFIO_USER=n
00:09:45.790  +++ CONFIG_ARCH=native
00:09:45.790  +++ CONFIG_HAVE_EVP_MAC=y
00:09:45.790  +++ CONFIG_URING_ZNS=n
00:09:45.790  +++ CONFIG_WERROR=y
00:09:45.790  +++ CONFIG_HAVE_LIBBSD=n
00:09:45.790  +++ CONFIG_UBSAN=y
00:09:45.790  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:09:45.790  +++ CONFIG_IPSEC_MB_DIR=
00:09:45.790  +++ CONFIG_GOLANG=n
00:09:45.790  +++ CONFIG_ISAL=y
00:09:45.790  +++ CONFIG_IDXD_KERNEL=y
00:09:45.790  +++ CONFIG_DPDK_LIB_DIR=
00:09:45.790  +++ CONFIG_RDMA_PROV=verbs
00:09:45.790  +++ CONFIG_APPS=y
00:09:45.790  +++ CONFIG_SHARED=y
00:09:45.790  +++ CONFIG_HAVE_KEYUTILS=y
00:09:45.790  +++ CONFIG_FC_PATH=
00:09:45.790  +++ CONFIG_DPDK_PKG_CONFIG=n
00:09:45.790  +++ CONFIG_FC=n
00:09:45.790  +++ CONFIG_AVAHI=n
00:09:45.790  +++ CONFIG_FIO_PLUGIN=y
00:09:45.790  +++ CONFIG_RAID5F=n
00:09:45.790  +++ CONFIG_EXAMPLES=y
00:09:45.790  +++ CONFIG_TESTS=y
00:09:45.790  +++ CONFIG_CRYPTO_MLX5=n
00:09:45.790  +++ CONFIG_MAX_LCORES=128
00:09:45.790  +++ CONFIG_IPSEC_MB=n
00:09:45.790  +++ CONFIG_PGO_DIR=
00:09:45.790  +++ CONFIG_DEBUG=y
00:09:45.790  +++ CONFIG_DPDK_COMPRESSDEV=n
00:09:45.790  +++ CONFIG_CROSS_PREFIX=
00:09:45.790  +++ CONFIG_COPY_FILE_RANGE=y
00:09:45.790  +++ CONFIG_URING=n
00:09:45.790  ++ source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/applications.sh
00:09:45.790  +++++ dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/applications.sh
00:09:45.790  ++++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common
00:09:45.790  +++ _root=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/common
00:09:45.790  +++ _root=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:09:45.790  +++ _app_dir=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin
00:09:45.790  +++ _test_app_dir=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/app
00:09:45.790  +++ _examples_dir=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/examples
00:09:45.790  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:09:45.790  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:09:45.790  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:09:45.790  +++ VHOST_APP=("$_app_dir/vhost")
00:09:45.790  +++ DD_APP=("$_app_dir/spdk_dd")
00:09:45.790  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:09:45.790  +++ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/include/spdk/config.h ]]
00:09:45.790  +++ [[ #ifndef SPDK_CONFIG_H
00:09:45.790  #define SPDK_CONFIG_H
00:09:45.790  #define SPDK_CONFIG_AIO_FSDEV 1
00:09:45.790  #define SPDK_CONFIG_APPS 1
00:09:45.790  #define SPDK_CONFIG_ARCH native
00:09:45.790  #define SPDK_CONFIG_ASAN 1
00:09:45.790  #undef SPDK_CONFIG_AVAHI
00:09:45.790  #undef SPDK_CONFIG_CET
00:09:45.790  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:45.790  #define SPDK_CONFIG_COVERAGE 1
00:09:45.790  #define SPDK_CONFIG_CROSS_PREFIX 
00:09:45.790  #undef SPDK_CONFIG_CRYPTO
00:09:45.790  #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:45.790  #undef SPDK_CONFIG_CUSTOMOCF
00:09:45.790  #undef SPDK_CONFIG_DAOS
00:09:45.790  #define SPDK_CONFIG_DAOS_DIR 
00:09:45.790  #define SPDK_CONFIG_DEBUG 1
00:09:45.790  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:45.790  #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build
00:09:45.790  #define SPDK_CONFIG_DPDK_INC_DIR 
00:09:45.790  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:09:45.790  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:45.790  #undef SPDK_CONFIG_DPDK_UADK
00:09:45.790  #define SPDK_CONFIG_ENV /var/jenkins/workspace/vhost-phy-autotest/spdk/lib/env_dpdk
00:09:45.790  #define SPDK_CONFIG_EXAMPLES 1
00:09:45.790  #undef SPDK_CONFIG_FC
00:09:45.790  #define SPDK_CONFIG_FC_PATH 
00:09:45.790  #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:45.790  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:45.790  #define SPDK_CONFIG_FSDEV 1
00:09:45.790  #undef SPDK_CONFIG_FUSE
00:09:45.790  #undef SPDK_CONFIG_FUZZER
00:09:45.790  #define SPDK_CONFIG_FUZZER_LIB 
00:09:45.790  #undef SPDK_CONFIG_GOLANG
00:09:45.790  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:45.790  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:45.790  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:45.790  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:45.790  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:45.790  #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:45.790  #undef SPDK_CONFIG_HAVE_LZ4
00:09:45.790  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:45.790  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:45.790  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:45.790  #define SPDK_CONFIG_IDXD 1
00:09:45.790  #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:45.790  #undef SPDK_CONFIG_IPSEC_MB
00:09:45.790  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:09:45.790  #define SPDK_CONFIG_ISAL 1
00:09:45.790  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:45.790  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:45.790  #define SPDK_CONFIG_LIBDIR 
00:09:45.790  #undef SPDK_CONFIG_LTO
00:09:45.790  #define SPDK_CONFIG_MAX_LCORES 128
00:09:45.790  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:45.790  #define SPDK_CONFIG_NVME_CUSE 1
00:09:45.790  #undef SPDK_CONFIG_OCF
00:09:45.790  #define SPDK_CONFIG_OCF_PATH 
00:09:45.790  #define SPDK_CONFIG_OPENSSL_PATH 
00:09:45.790  #undef SPDK_CONFIG_PGO_CAPTURE
00:09:45.790  #define SPDK_CONFIG_PGO_DIR 
00:09:45.790  #undef SPDK_CONFIG_PGO_USE
00:09:45.790  #define SPDK_CONFIG_PREFIX /usr/local
00:09:45.790  #undef SPDK_CONFIG_RAID5F
00:09:45.790  #undef SPDK_CONFIG_RBD
00:09:45.790  #define SPDK_CONFIG_RDMA 1
00:09:45.790  #define SPDK_CONFIG_RDMA_PROV verbs
00:09:45.790  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:45.790  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:45.790  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:45.790  #define SPDK_CONFIG_SHARED 1
00:09:45.791  #undef SPDK_CONFIG_SMA
00:09:45.791  #define SPDK_CONFIG_TESTS 1
00:09:45.791  #undef SPDK_CONFIG_TSAN
00:09:45.791  #define SPDK_CONFIG_UBLK 1
00:09:45.791  #define SPDK_CONFIG_UBSAN 1
00:09:45.791  #undef SPDK_CONFIG_UNIT_TESTS
00:09:45.791  #undef SPDK_CONFIG_URING
00:09:45.791  #define SPDK_CONFIG_URING_PATH 
00:09:45.791  #undef SPDK_CONFIG_URING_ZNS
00:09:45.791  #undef SPDK_CONFIG_USDT
00:09:45.791  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:09:45.791  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:09:45.791  #undef SPDK_CONFIG_VFIO_USER
00:09:45.791  #define SPDK_CONFIG_VFIO_USER_DIR 
00:09:45.791  #define SPDK_CONFIG_VHOST 1
00:09:45.791  #define SPDK_CONFIG_VIRTIO 1
00:09:45.791  #undef SPDK_CONFIG_VTUNE
00:09:45.791  #define SPDK_CONFIG_VTUNE_DIR 
00:09:45.791  #define SPDK_CONFIG_WERROR 1
00:09:45.791  #define SPDK_CONFIG_WPDK_DIR 
00:09:45.791  #undef SPDK_CONFIG_XNVME
00:09:45.791  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:09:45.791  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:09:45.791  ++ source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:09:45.791  +++ shopt -s extglob
00:09:45.791  +++ [[ -e /bin/wpdk_common.sh ]]
00:09:45.791  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:45.791  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:45.791  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:45.791  ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:45.791  ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:45.791  ++++ export PATH
00:09:45.791  ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:45.791  ++ source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/common
00:09:45.791  +++++ dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/common
00:09:45.791  ++++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm
00:09:45.791  +++ _pmdir=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm
00:09:45.791  ++++ readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/../../../
00:09:45.791  +++ _pmrootdir=/var/jenkins/workspace/vhost-phy-autotest/spdk
00:09:45.791  +++ TEST_TAG=N/A
00:09:45.791  +++ TEST_TAG_FILE=/var/jenkins/workspace/vhost-phy-autotest/spdk/.run_test_name
00:09:45.791  +++ PM_OUTPUTDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power
00:09:45.791  ++++ uname -s
00:09:45.791  +++ PM_OS=Linux
00:09:45.791  +++ MONITOR_RESOURCES_SUDO=()
00:09:45.791  +++ declare -A MONITOR_RESOURCES_SUDO
00:09:45.791  +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:09:45.791  +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:09:45.791  +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:09:45.791  +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:09:45.791  +++ SUDO[0]=
00:09:45.791  +++ SUDO[1]='sudo -E'
00:09:45.791  +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:09:45.791  +++ [[ Linux == FreeBSD ]]
00:09:45.791  +++ [[ Linux == Linux ]]
00:09:45.791  +++ [[ ............................... != QEMU ]]
00:09:45.791  +++ [[ ! -e /.dockerenv ]]
00:09:45.791  +++ MONITOR_RESOURCES+=(collect-cpu-temp)
00:09:45.791  +++ MONITOR_RESOURCES+=(collect-bmc-pm)
00:09:45.791  +++ [[ ! -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power ]]
00:09:45.791  ++ : 0
00:09:45.791  ++ export RUN_NIGHTLY
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_RUN_VALGRIND
00:09:45.791  ++ : 1
00:09:45.791  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_UNITTEST
00:09:45.791  ++ :
00:09:45.791  ++ export SPDK_TEST_AUTOBUILD
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_RELEASE_BUILD
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ISAL
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ISCSI
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ISCSI_INITIATOR
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME_PMR
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME_BP
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME_CLI
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME_CUSE
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVME_FDP
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_NVMF
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_VFIOUSER
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_VFIOUSER_QEMU
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_FUZZER
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_FUZZER_SHORT
00:09:45.791  ++ : rdma
00:09:45.791  ++ export SPDK_TEST_NVMF_TRANSPORT
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_RBD
00:09:45.791  ++ : 1
00:09:45.791  ++ export SPDK_TEST_VHOST
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_BLOCKDEV
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_RAID
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_IOAT
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_BLOBFS
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_VHOST_INIT
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_LVOL
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_VBDEV_COMPRESS
00:09:45.791  ++ : 1
00:09:45.791  ++ export SPDK_RUN_ASAN
00:09:45.791  ++ : 1
00:09:45.791  ++ export SPDK_RUN_UBSAN
00:09:45.791  ++ :
00:09:45.791  ++ export SPDK_RUN_EXTERNAL_DPDK
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_RUN_NON_ROOT
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_CRYPTO
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_FTL
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_OCF
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_VMD
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_OPAL
00:09:45.791  ++ :
00:09:45.791  ++ export SPDK_TEST_NATIVE_DPDK
00:09:45.791  ++ : true
00:09:45.791  ++ export SPDK_AUTOTEST_X
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_URING
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_USDT
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_USE_IGB_UIO
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_SCHEDULER
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_SCANBUILD
00:09:45.791  ++ :
00:09:45.791  ++ export SPDK_TEST_NVMF_NICS
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_SMA
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_DAOS
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_XNVME
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ACCEL
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ACCEL_DSA
00:09:45.791  ++ : 0
00:09:45.791  ++ export SPDK_TEST_ACCEL_IAA
00:09:45.791  ++ :
00:09:45.791  ++ export SPDK_TEST_FUZZER_TARGET
00:09:45.792  ++ : 0
00:09:45.792  ++ export SPDK_TEST_NVMF_MDNS
00:09:45.792  ++ : 0
00:09:45.792  ++ export SPDK_JSONRPC_GO_CLIENT
00:09:45.792  ++ : 0
00:09:45.792  ++ export SPDK_TEST_SETUP
00:09:45.792  ++ : 0
00:09:45.792  ++ export SPDK_TEST_NVME_INTERRUPT
00:09:45.792  ++ export SPDK_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib
00:09:45.792  ++ SPDK_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib
00:09:45.792  ++ export DPDK_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib
00:09:45.792  ++ DPDK_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib
00:09:45.792  ++ export VFIO_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:45.792  ++ VFIO_LIB_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:45.792  ++ export LD_LIBRARY_PATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:45.792  ++ LD_LIBRARY_PATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/vhost-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:45.792  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:09:45.792  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:09:45.792  ++ export PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python
00:09:45.792  ++ PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python
00:09:45.792  ++ export PYTHONDONTWRITEBYTECODE=1
00:09:45.792  ++ PYTHONDONTWRITEBYTECODE=1
00:09:45.792  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:45.792  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:09:45.792  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:45.792  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:09:45.792  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:09:45.792  ++ rm -rf /var/tmp/asan_suppression_file
00:09:45.792  ++ cat
00:09:45.792  ++ echo leak:libfuse3.so
00:09:45.792  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:45.792  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:09:45.792  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:45.792  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:09:45.792  ++ '[' -z /var/spdk/dependencies ']'
00:09:45.792  ++ export DEPENDENCY_DIR
00:09:45.792  ++ export SPDK_BIN_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin
00:09:45.792  ++ SPDK_BIN_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin
00:09:45.792  ++ export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/examples
00:09:45.792  ++ SPDK_EXAMPLE_DIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/examples
00:09:45.792  ++ export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:45.792  ++ QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:45.792  ++ export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:45.792  ++ VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:45.792  ++ export AR_TOOL=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:45.792  ++ AR_TOOL=/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:09:45.792  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:45.792  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:09:45.792  ++ _LCOV_MAIN=0
00:09:45.792  ++ _LCOV_LLVM=1
00:09:45.792  ++ _LCOV=
00:09:45.792  ++ [[ '' == *clang* ]]
00:09:45.792  ++ [[ 0 -eq 1 ]]
00:09:45.792  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/vhost-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:09:45.792  ++ _lcov_opt[_LCOV_MAIN]=
00:09:45.792  ++ lcov_opt=
00:09:45.792  ++ '[' 0 -eq 0 ']'
00:09:45.792  ++ export valgrind=
00:09:45.792  ++ valgrind=
00:09:45.792  +++ uname -s
00:09:45.792  ++ '[' Linux = Linux ']'
00:09:45.792  ++ HUGEMEM=4096
00:09:45.792  ++ export CLEAR_HUGE=yes
00:09:45.792  ++ CLEAR_HUGE=yes
00:09:45.792  ++ MAKE=make
00:09:45.792  +++ nproc
00:09:45.792  ++ MAKEFLAGS=-j72
00:09:45.792  ++ export HUGEMEM=4096
00:09:45.792  ++ HUGEMEM=4096
00:09:45.792  ++ NO_HUGE=()
00:09:45.792  ++ TEST_MODE=
00:09:45.792  ++ for i in "$@"
00:09:45.792  ++ case "$i" in
00:09:45.792  ++ [[ -z '' ]]
00:09:45.792  ++ PYTHONPATH+=:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins
00:09:45.792  ++ exec
00:09:45.792  ++ PYTHONPATH=:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vhost-phy-autotest/spdk/python:/var/jenkins/workspace/vhost-phy-autotest/spdk/test/rpc_plugins
00:09:45.792  ++ /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py --server
00:09:45.792  ++ set_test_storage 2147483648
00:09:45.792  ++ [[ -v testdir ]]
00:09:45.792  ++ local requested_size=2147483648
00:09:45.792  ++ local mount target_dir
00:09:45.792  ++ local -A mounts fss sizes avails uses
00:09:45.792  ++ local source fs size avail mount use
00:09:45.792  ++ local storage_fallback storage_candidates
00:09:45.792  +++ mktemp -udt spdk.XXXXXX
00:09:45.792  ++ storage_fallback=/tmp/spdk.cVNW2d
00:09:45.792  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:09:45.792  ++ [[ -n '' ]]
00:09:45.792  ++ [[ -n '' ]]
00:09:45.792  ++ mkdir -p /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot /tmp/spdk.cVNW2d/tests/vhost_boot /tmp/spdk.cVNW2d
00:09:45.792  ++ requested_size=2214592512
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  +++ df -T
00:09:45.792  +++ grep -v Filesystem
00:09:45.792  ++ mounts["$mount"]=spdk_devtmpfs
00:09:45.792  ++ fss["$mount"]=devtmpfs
00:09:45.792  ++ avails["$mount"]=67108864
00:09:45.792  ++ sizes["$mount"]=67108864
00:09:45.792  ++ uses["$mount"]=0
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=/dev/pmem0
00:09:45.792  ++ fss["$mount"]=ext2
00:09:45.792  ++ avails["$mount"]=4096
00:09:45.792  ++ sizes["$mount"]=5284429824
00:09:45.792  ++ uses["$mount"]=5284425728
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=spdk_root
00:09:45.792  ++ fss["$mount"]=overlay
00:09:45.792  ++ avails["$mount"]=50840010752
00:09:45.792  ++ sizes["$mount"]=61734395904
00:09:45.792  ++ uses["$mount"]=10894385152
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=tmpfs
00:09:45.792  ++ fss["$mount"]=tmpfs
00:09:45.792  ++ avails["$mount"]=30816608256
00:09:45.792  ++ sizes["$mount"]=30867197952
00:09:45.792  ++ uses["$mount"]=50589696
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=tmpfs
00:09:45.792  ++ fss["$mount"]=tmpfs
00:09:45.792  ++ avails["$mount"]=12340989952
00:09:45.792  ++ sizes["$mount"]=12346880000
00:09:45.792  ++ uses["$mount"]=5890048
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=tmpfs
00:09:45.792  ++ fss["$mount"]=tmpfs
00:09:45.792  ++ avails["$mount"]=30866800640
00:09:45.792  ++ sizes["$mount"]=30867197952
00:09:45.792  ++ uses["$mount"]=397312
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ mounts["$mount"]=tmpfs
00:09:45.792  ++ fss["$mount"]=tmpfs
00:09:45.792  ++ avails["$mount"]=6173425664
00:09:45.792  ++ sizes["$mount"]=6173437952
00:09:45.792  ++ uses["$mount"]=12288
00:09:45.792  ++ read -r source fs size use avail _ mount
00:09:45.792  ++ printf '* Looking for test storage...\n'
00:09:45.792  * Looking for test storage...
00:09:45.792  ++ local target_space new_size
00:09:45.792  ++ for target_dir in "${storage_candidates[@]}"
00:09:45.792  +++ df /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.792  +++ awk '$1 !~ /Filesystem/{print $6}'
00:09:45.792  ++ mount=/
00:09:45.792  ++ target_space=50840010752
00:09:45.792  ++ (( target_space == 0 || target_space < requested_size ))
00:09:45.792  ++ (( target_space >= requested_size ))
00:09:45.792  ++ [[ overlay == tmpfs ]]
00:09:45.792  ++ [[ overlay == ramfs ]]
00:09:45.792  ++ [[ / == / ]]
00:09:45.792  ++ new_size=13108977664
00:09:45.792  ++ (( new_size * 100 / sizes[/] > 95 ))
00:09:45.792  ++ export SPDK_TEST_STORAGE=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.793  ++ SPDK_TEST_STORAGE=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.793  ++ printf '* Found test storage at %s\n' /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.793  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:45.793  ++ return 0
00:09:45.793  ++ set -o errtrace
00:09:45.793  ++ shopt -s extdebug
00:09:45.793  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:09:45.793  ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1685 -- # true
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1687 -- # xtrace_fd
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@29 -- # exec
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@31 -- # xtrace_restore
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@18 -- # set -x
00:09:45.793    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:45.793     10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1693 -- # lcov --version
00:09:45.793     10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@336 -- # IFS=.-:
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@336 -- # read -ra ver1
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@337 -- # IFS=.-:
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@337 -- # read -ra ver2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@338 -- # local 'op=<'
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@340 -- # ver1_l=2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@341 -- # ver2_l=1
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@344 -- # case "$op" in
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@345 -- # : 1
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@365 -- # decimal 1
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@353 -- # local d=1
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@355 -- # echo 1
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@365 -- # ver1[v]=1
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@366 -- # decimal 2
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@353 -- # local d=2
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:46.052     10:37:35 vhost.vhost_boot -- scripts/common.sh@355 -- # echo 2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@366 -- # ver2[v]=2
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:46.052    10:37:35 vhost.vhost_boot -- scripts/common.sh@368 -- # return 0
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:46.052  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.052  		--rc genhtml_branch_coverage=1
00:09:46.052  		--rc genhtml_function_coverage=1
00:09:46.052  		--rc genhtml_legend=1
00:09:46.052  		--rc geninfo_all_blocks=1
00:09:46.052  		--rc geninfo_unexecuted_blocks=1
00:09:46.052  		
00:09:46.052  		'
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:46.052  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.052  		--rc genhtml_branch_coverage=1
00:09:46.052  		--rc genhtml_function_coverage=1
00:09:46.052  		--rc genhtml_legend=1
00:09:46.052  		--rc geninfo_all_blocks=1
00:09:46.052  		--rc geninfo_unexecuted_blocks=1
00:09:46.052  		
00:09:46.052  		'
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:46.052  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.052  		--rc genhtml_branch_coverage=1
00:09:46.052  		--rc genhtml_function_coverage=1
00:09:46.052  		--rc genhtml_legend=1
00:09:46.052  		--rc geninfo_all_blocks=1
00:09:46.052  		--rc geninfo_unexecuted_blocks=1
00:09:46.052  		
00:09:46.052  		'
00:09:46.052    10:37:35 vhost.vhost_boot -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:46.052  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:46.052  		--rc genhtml_branch_coverage=1
00:09:46.052  		--rc genhtml_function_coverage=1
00:09:46.052  		--rc genhtml_legend=1
00:09:46.052  		--rc geninfo_all_blocks=1
00:09:46.052  		--rc geninfo_unexecuted_blocks=1
00:09:46.052  		
00:09:46.052  		'
00:09:46.052   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@11 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@6 -- # : false
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@9 -- # : qemu-img
00:09:46.053     10:37:35 vhost.vhost_boot -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/..
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vhost-phy-autotest
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:46.053      10:37:35 vhost.vhost_boot -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/vhost_boot.sh
00:09:46.053     10:37:35 vhost.vhost_boot -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:46.053     10:37:35 vhost.vhost_boot -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/common.sh
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:46.053     10:37:35 vhost.vhost_boot -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:46.053      10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:46.053       10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:46.053       10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:46.053       10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:46.053       10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@10 -- # echo 2
00:09:46.053      10:37:35 vhost.vhost_boot -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@12 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:46.053    10:37:35 vhost.vhost_boot -- bdev/nbd_common.sh@6 -- # set -e
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@14 -- # get_vhost_dir 0
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@105 -- # local vhost_name=0
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@14 -- # rpc_py='/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@15 -- # vm_no=0
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@41 -- # getopts h-: optchar
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@42 -- # case "$optchar" in
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@44 -- # case "$OPTARG" in
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@45 -- # os_image=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@41 -- # getopts h-: optchar
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@54 -- # [[ 0 -ne 0 ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@59 -- # [[ -z /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@64 -- # vhosttestinit
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@66 -- # trap 'err_clean "${FUNCNAME}" "${LINENO}"' ERR
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@67 -- # timing_enter start_vhost
00:09:46.053   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:46.053   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@68 -- # vhost_run -n 0
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@116 -- # local OPTIND
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@117 -- # local vhost_name
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@118 -- # local run_gen_nvme=true
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@119 -- # local vhost_bin=vhost
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@120 -- # vhost_args=()
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@120 -- # local vhost_args
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@121 -- # cmd=()
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@121 -- # local cmd
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@124 -- # case "$optchar" in
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@125 -- # vhost_name=0
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@137 -- # shift 2
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@139 -- # vhost_args=("$@")
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@141 -- # [[ -z 0 ]]
00:09:46.053   10:37:35 vhost.vhost_boot -- vhost/common.sh@146 -- # local vhost_dir
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@147 -- # get_vhost_dir 0
00:09:46.053    10:37:35 vhost.vhost_boot -- vhost/common.sh@105 -- # local vhost_name=0
00:09:46.054    10:37:35 vhost.vhost_boot -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:46.054    10:37:35 vhost.vhost_boot -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@147 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@148 -- # local vhost_app=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@149 -- # local vhost_log_file=/root/vhost_test/vhost/0/vhost.log
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@150 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@151 -- # local vhost_socket=/root/vhost_test/vhost/0/usvhost
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@152 -- # notice 'starting vhost app in background'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'starting vhost app in background'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: starting vhost app in background'
00:09:46.054  INFO: starting vhost app in background
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@153 -- # [[ -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@154 -- # [[ -d /root/vhost_test/vhost/0 ]]
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@155 -- # mkdir -p /root/vhost_test/vhost/0
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@157 -- # [[ ! -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost ]]
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@162 -- # cmd=("$vhost_app" "-r" "$vhost_dir/rpc.sock" "${vhost_args[@]}")
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@163 -- # [[ vhost =~ vhost ]]
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@164 -- # cmd+=(-S "$vhost_dir")
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@167 -- # notice 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:09:46.054  INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@168 -- # notice 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Socket:      /root/vhost_test/vhost/0/usvhost'
00:09:46.054  INFO: Socket:      /root/vhost_test/vhost/0/usvhost
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@169 -- # notice 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0'
00:09:46.054  INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@171 -- # timing_enter vhost_start
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@173 -- # iobuf_small_count=16383
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@174 -- # iobuf_large_count=2047
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@177 -- # vhost_pid=1861945
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@178 -- # echo 1861945
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@180 -- # notice 'waiting for app to run...'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'waiting for app to run...'
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: waiting for app to run...'
00:09:46.054  INFO: waiting for app to run...
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@181 -- # waitforlisten 1861945 /root/vhost_test/vhost/0/rpc.sock
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@835 -- # '[' -z 1861945 ']'
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:09:46.054   10:37:35 vhost.vhost_boot -- vhost/common.sh@176 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0 --wait-for-rpc
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:09:46.054  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:46.054   10:37:35 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:46.054  [2024-11-19 10:37:35.760640] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:09:46.054  [2024-11-19 10:37:35.760759] [ DPDK EAL parameters: vhost --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1861945 ]
00:09:46.054  EAL: No free 2048 kB hugepages reported on node 1
00:09:46.313  [2024-11-19 10:37:35.899244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.313  [2024-11-19 10:37:36.002219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.879   10:37:36 vhost.vhost_boot -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:46.879   10:37:36 vhost.vhost_boot -- common/autotest_common.sh@868 -- # return 0
00:09:46.879   10:37:36 vhost.vhost_boot -- vhost/common.sh@183 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock iobuf_set_options --small-pool-count=16383 --large-pool-count=2047
00:09:47.138   10:37:36 vhost.vhost_boot -- vhost/common.sh@188 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock framework_start_init
00:09:47.705   10:37:37 vhost.vhost_boot -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0 != *\-\-\n\o\-\p\c\i* ]]
00:09:47.705   10:37:37 vhost.vhost_boot -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock -S /root/vhost_test/vhost/0 != *\-\u* ]]
00:09:47.705   10:37:37 vhost.vhost_boot -- vhost/common.sh@192 -- # true
00:09:47.705   10:37:37 vhost.vhost_boot -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:47.705   10:37:37 vhost.vhost_boot -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@196 -- # notice 'vhost started - pid=1861945'
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'vhost started - pid=1861945'
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: vhost started - pid=1861945'
00:09:49.082  INFO: vhost started - pid=1861945
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost/common.sh@198 -- # timing_exit vhost_start
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@69 -- # timing_exit start_vhost
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:49.082   10:37:38 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@71 -- # timing_enter create_lvol
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:49.082   10:37:38 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:49.082    10:37:38 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@73 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_get_bdevs -b Nvme0n1
00:09:49.341   10:37:38 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@73 -- # nvme_bdev='[
00:09:49.341    {
00:09:49.341      "name": "Nvme0n1",
00:09:49.341      "aliases": [
00:09:49.341        "36344730-5260-5497-0025-38450000011d"
00:09:49.341      ],
00:09:49.341      "product_name": "NVMe disk",
00:09:49.341      "block_size": 512,
00:09:49.341      "num_blocks": 3750748848,
00:09:49.341      "uuid": "36344730-5260-5497-0025-38450000011d",
00:09:49.341      "numa_id": 0,
00:09:49.341      "assigned_rate_limits": {
00:09:49.341        "rw_ios_per_sec": 0,
00:09:49.341        "rw_mbytes_per_sec": 0,
00:09:49.341        "r_mbytes_per_sec": 0,
00:09:49.341        "w_mbytes_per_sec": 0
00:09:49.341      },
00:09:49.341      "claimed": false,
00:09:49.341      "zoned": false,
00:09:49.341      "supported_io_types": {
00:09:49.341        "read": true,
00:09:49.341        "write": true,
00:09:49.341        "unmap": true,
00:09:49.341        "flush": true,
00:09:49.341        "reset": true,
00:09:49.341        "nvme_admin": true,
00:09:49.341        "nvme_io": true,
00:09:49.341        "nvme_io_md": false,
00:09:49.341        "write_zeroes": true,
00:09:49.341        "zcopy": false,
00:09:49.341        "get_zone_info": false,
00:09:49.341        "zone_management": false,
00:09:49.341        "zone_append": false,
00:09:49.341        "compare": true,
00:09:49.341        "compare_and_write": false,
00:09:49.341        "abort": true,
00:09:49.341        "seek_hole": false,
00:09:49.341        "seek_data": false,
00:09:49.341        "copy": false,
00:09:49.341        "nvme_iov_md": false
00:09:49.341      },
00:09:49.341      "driver_specific": {
00:09:49.341        "nvme": [
00:09:49.341          {
00:09:49.341            "pci_address": "0000:5e:00.0",
00:09:49.341            "trid": {
00:09:49.341              "trtype": "PCIe",
00:09:49.341              "traddr": "0000:5e:00.0"
00:09:49.341            },
00:09:49.341            "ctrlr_data": {
00:09:49.341              "cntlid": 6,
00:09:49.341              "vendor_id": "0x144d",
00:09:49.341              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:09:49.341              "serial_number": "S64GNE0R605497",
00:09:49.341              "firmware_revision": "GDC5302Q",
00:09:49.341              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:09:49.341              "oacs": {
00:09:49.341                "security": 1,
00:09:49.341                "format": 1,
00:09:49.341                "firmware": 1,
00:09:49.341                "ns_manage": 1
00:09:49.341              },
00:09:49.341              "multi_ctrlr": false,
00:09:49.341              "ana_reporting": false
00:09:49.341            },
00:09:49.341            "vs": {
00:09:49.341              "nvme_version": "1.4"
00:09:49.341            },
00:09:49.341            "ns_data": {
00:09:49.341              "id": 1,
00:09:49.341              "can_share": false
00:09:49.341            },
00:09:49.341            "security": {
00:09:49.341              "opal": true
00:09:49.341            }
00:09:49.341          }
00:09:49.341        ],
00:09:49.341        "mp_policy": "active_passive"
00:09:49.341      }
00:09:49.341    }
00:09:49.341  ]'
00:09:49.341    10:37:38 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@74 -- # jq '.[] .block_size'
00:09:49.341   10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@74 -- # nvme_bdev_bs=512
00:09:49.341    10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@75 -- # jq '.[] .name'
00:09:49.341   10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@75 -- # nvme_bdev_name='"Nvme0n1"'
00:09:49.341   10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@76 -- # [[ 512 != 512 ]]
00:09:49.341   10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@81 -- # lvb_size=20000
00:09:49.341    10:37:39 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@82 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create_lvstore Nvme0n1 lvs0
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@82 -- # lvs_u=cbe9c436-2af1-4950-9875-6dec3aabd711
00:09:50.723    10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@83 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create -u cbe9c436-2af1-4950-9875-6dec3aabd711 lvb0 20000
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@83 -- # lvb_u=4916491f-7e66-4bf3-95b6-3b5c7cc5278b
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@84 -- # timing_exit create_lvol
00:09:50.723   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:50.723   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@86 -- # timing_enter convert_vm_image
00:09:50.723   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:50.723   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@87 -- # modprobe nbd
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@88 -- # trap 'nbd_stop_disks $(get_vhost_dir 0)/rpc.sock /dev/nbd0; err_clean "${FUNCNAME}" "${LINENO}"' ERR
00:09:50.723    10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@89 -- # get_vhost_dir 0
00:09:50.723    10:37:40 vhost.vhost_boot -- vhost/common.sh@105 -- # local vhost_name=0
00:09:50.723    10:37:40 vhost.vhost_boot -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:50.723    10:37:40 vhost.vhost_boot -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:50.723   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@89 -- # nbd_start_disks /root/vhost_test/vhost/0/rpc.sock 4916491f-7e66-4bf3-95b6-3b5c7cc5278b /dev/nbd0
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@9 -- # local rpc_server=/root/vhost_test/vhost/0/rpc.sock
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@10 -- # bdev_list=('4916491f-7e66-4bf3-95b6-3b5c7cc5278b')
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@12 -- # local i
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:50.723   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nbd_start_disk 4916491f-7e66-4bf3-95b6-3b5c7cc5278b /dev/nbd0
00:09:50.983  /dev/nbd0
00:09:50.983    10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:50.983   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@873 -- # local i
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@877 -- # break
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/nbdtest bs=4096 count=1 iflag=direct
00:09:50.983  1+0 records in
00:09:50.983  1+0 records out
00:09:50.983  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254284 s, 16.1 MB/s
00:09:50.983    10:37:40 vhost.vhost_boot -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/nbdtest
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@890 -- # size=4096
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost_boot/nbdtest
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:50.983   10:37:40 vhost.vhost_boot -- common/autotest_common.sh@893 -- # return 0
00:09:50.983   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:50.983   10:37:40 vhost.vhost_boot -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:09:50.983   10:37:40 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@90 -- # qemu-img convert /var/spdk/dependencies/vhost/spdk_test_image.qcow2 -O raw /dev/nbd0
00:10:00.965   10:37:50 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@91 -- # sync
00:10:13.235    10:38:01 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@92 -- # get_vhost_dir 0
00:10:13.235    10:38:01 vhost.vhost_boot -- vhost/common.sh@105 -- # local vhost_name=0
00:10:13.235    10:38:01 vhost.vhost_boot -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:13.235    10:38:01 vhost.vhost_boot -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:13.235   10:38:01 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@92 -- # nbd_stop_disks /root/vhost_test/vhost/0/rpc.sock /dev/nbd0
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@49 -- # local rpc_server=/root/vhost_test/vhost/0/rpc.sock
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@51 -- # local i
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nbd_stop_disk /dev/nbd0
00:10:13.235    10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.235   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.236   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:13.236   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@41 -- # break
00:10:13.236   10:38:01 vhost.vhost_boot -- bdev/nbd_common.sh@45 -- # return 0
00:10:13.236   10:38:01 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@93 -- # sleep 1
00:10:13.236   10:38:02 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@94 -- # timing_exit convert_vm_image
00:10:13.236   10:38:02 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:13.236   10:38:02 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:13.236   10:38:02 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@96 -- # trap 'err_clean "${FUNCNAME}" "${LINENO}"' ERR
00:10:13.236   10:38:02 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@97 -- # timing_enter create_vhost_controller
00:10:13.236   10:38:02 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:13.236   10:38:02 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:13.236   10:38:02 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@98 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller naa.vhost_vm.0
00:10:13.236  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vhost-user server: socket created, fd: 316
00:10:13.236  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) binding succeeded
00:10:13.236   10:38:02 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@99 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.vhost_vm.0 0 4916491f-7e66-4bf3-95b6-3b5c7cc5278b
00:10:13.236  0
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@100 -- # timing_exit create_vhost_controller
00:10:13.494   10:38:03 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:13.494   10:38:03 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@102 -- # timing_enter setup_vm
00:10:13.494   10:38:03 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:13.494   10:38:03 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@103 -- # vm_setup --disk-type=spdk_vhost_scsi --force=0 --disks=vhost_vm --spdk-boot=vhost_vm
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@518 -- # xtrace_disable
00:10:13.494   10:38:03 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:13.494  INFO: Creating new VM in /root/vhost_test/vms/0
00:10:13.494  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:13.494  INFO: TASK MASK: 1-2
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@671 -- # local node_num=0
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:13.494  INFO: NUMA NODE: 0
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:13.494   10:38:03 vhost.vhost_boot -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@686 -- # [[ -z vhost_vm ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@701 -- # IFS=,
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@702 -- # disk_type=spdk_vhost_scsi
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@704 -- # case $disk_type in
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@723 -- # notice 'using socket /root/vhost_test/vhost/0/naa.vhost_vm.0'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vhost/0/naa.vhost_vm.0'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vhost/0/naa.vhost_vm.0'
00:10:13.495  INFO: using socket /root/vhost_test/vhost/0/naa.vhost_vm.0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@724 -- # cmd+=(-chardev "socket,id=char_$disk,path=$vhost_dir/naa.$disk.$vm_num")
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@725 -- # cmd+=(-device "vhost-user-scsi-pci,id=scsi_$disk,num_queues=$queue_number,chardev=char_$disk")
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@726 -- # [[ vhost_vm == \v\h\o\s\t\_\v\m ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@727 -- # cmd[-1]+=,bootindex=0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@728 -- # boot_disk_present=true
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@780 -- # [[ -n vhost_vm ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@780 -- # [[ true == false ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@785 -- # (( 0 ))
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:10:13.495  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@787 -- # cat
00:10:13.495    10:38:03 vhost.vhost_boot -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -chardev socket,id=char_vhost_vm,path=/root/vhost_test/vhost/0/naa.vhost_vm.0 -device vhost-user-scsi-pci,id=scsi_vhost_vm,num_queues=2,chardev=char_vhost_vm,bootindex=0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@827 -- # echo 10000
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@828 -- # echo 10001
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@829 -- # echo 10002
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@834 -- # echo 10004
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@835 -- # echo 100
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@104 -- # vm_run 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@843 -- # local run_all=false
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@844 -- # local vms_to_run=
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@856 -- # false
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@859 -- # shift 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@860 -- # for vm in "$@"
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@871 -- # vm_is_running 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@373 -- # return 1
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:10:13.495  INFO: running /root/vhost_test/vms/0/run.sh
00:10:13.495   10:38:03 vhost.vhost_boot -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:10:13.495  Running VM in /root/vhost_test/vms/0
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) new vhost user connection is 49
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) new device, handle is 0
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_FEATURES
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) negotiated Vhost-user protocol features: 0x11cbf
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_QUEUE_NUM
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_OWNER
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_FEATURES
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:0 file:320
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ERR
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:1 file:321
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ERR
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:2 file:322
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ERR
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:3 file:323
00:10:14.061  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ERR
00:10:14.061  Waiting for QEMU pid file
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_INFLIGHT_FD
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) get_inflight_fd num_queues: 4
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) get_inflight_fd queue_size: 128
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight mmap_size: 8448
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight mmap_offset: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight fd: 324
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_INFLIGHT_FD
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd mmap_size: 8448
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd mmap_offset: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd num_queues: 4
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd queue_size: 128
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd fd: 325
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd pervq_inflight_size: 2112
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_FEATURES
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) negotiated Virtio features: 0x140000000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_STATUS
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_STATUS
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) new device status(0x00000008):
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-RESET: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-ACKNOWLEDGE: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-DRIVER: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-FEATURES_OK: 1
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-DRIVER_OK: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-DEVICE_NEED_RESET: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	-FAILED: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_MEM_TABLE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) guest memory region size: 0x40000000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 guest physical addr: 0x0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 guest virtual  addr: 0x7f0c8be00000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 host  virtual  addr: 0x7f22ca000000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 mmap addr : 0x7f22ca000000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 mmap size : 0x40000000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 mmap align: 0x200000
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) 	 mmap off  : 0x0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_NUM
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_BASE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:2 last_used_idx:0 last_avail_idx:0.
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ADDR
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_KICK
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring kick idx:2 file:326
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 0
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 1
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 2
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 3
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:0 file:328
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:1 file:320
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:2 file:321
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:14.319  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:3 file:322
00:10:15.251  === qemu.log ===
00:10:15.251  === qemu.log ===
00:10:15.251   10:38:04 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@105 -- # vm_wait_for_boot 300 0
00:10:15.251   10:38:04 vhost.vhost_boot -- vhost/common.sh@913 -- # assert_number 300
00:10:15.251   10:38:04 vhost.vhost_boot -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:10:15.252   10:38:04 vhost.vhost_boot -- vhost/common.sh@281 -- # return 0
00:10:15.252   10:38:04 vhost.vhost_boot -- vhost/common.sh@915 -- # xtrace_disable
00:10:15.252   10:38:04 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:15.252  INFO: Waiting for VMs to boot
00:10:15.252  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 0
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 1
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 2
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 3
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_VRING_BASE
00:10:17.151  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:2 file:3242
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_INFLIGHT_FD
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) get_inflight_fd num_queues: 4
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) get_inflight_fd queue_size: 128
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight mmap_size: 8448
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight mmap_offset: 0
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) send inflight fd: 321
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_INFLIGHT_FD
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd mmap_size: 8448
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd mmap_offset: 0
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd num_queues: 4
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd queue_size: 128
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd fd: 323
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set_inflight_fd pervq_inflight_size: 2112
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_FEATURES
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) negotiated Virtio features: 0x150000006
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_STATUS
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_MEM_TABLE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) memory regions not changed
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_NUM
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_BASE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ADDR
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_KICK
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring kick idx:0 file:321
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_NUM
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_BASE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ADDR
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_KICK
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring kick idx:1 file:325
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_NUM
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_BASE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:2 last_used_idx:0 last_avail_idx:0.
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ADDR
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_KICK
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring kick idx:2 file:326
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_NUM
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_BASE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:3 last_used_idx:0 last_avail_idx:0.
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ADDR
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_KICK
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring kick idx:3 file:329
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 0
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 1
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 2
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 1 to qp idx: 3
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:0 file:330
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:1 file:328
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:2 file:320
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_CALL
00:10:18.085  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring call idx:3 file:331
00:10:18.085  [2024-11-19 10:38:07.545167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:10:24.642  
00:10:24.642  INFO: VM0 ready
00:10:24.642  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:24.642  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:24.901  INFO: all VMs ready
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost/common.sh@973 -- # return 0
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@106 -- # timing_exit setup_vm
00:10:24.901   10:38:14 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:24.901   10:38:14 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@108 -- # start_part_sector=0
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@108 -- # drive_size=0
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@108 -- # part_id=0
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@108 -- # pt_type=
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:24.901   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@113 -- # vm_exec 0 'parted /dev/sda -ms unit s print'
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:24.901    10:38:14 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:24.901     10:38:14 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:25.160     10:38:14 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:25.160     10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:25.160     10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:25.160     10:38:14 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:25.160     10:38:14 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'parted /dev/sda -ms unit s print'
00:10:25.160  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:25.160  Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 30474240 blocks) or continue with the current setting? 
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ BYT; == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ BYT; =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=40960000
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=scsi
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ /dev/sda == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # drive_size=40960000
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # pt_type=gpt
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ /dev/sda =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=2048
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=4095
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ 1 == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # start_part_sector=4096
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # part_id=1
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=4096
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=2052095
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ 2 == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # start_part_sector=2052096
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # part_id=2
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=2052096
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=2256895
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ 3 == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ 3 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # start_part_sector=2256896
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # part_id=3
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=2256896
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=2265087
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ 4 == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ 4 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # start_part_sector=2265088
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # part_id=4
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # start=2265088
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@110 -- # end=10483711
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@111 -- # [[ 5 == /dev/sda ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # [[ 5 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # start_part_sector=10483712
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@112 -- # part_id=5
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # IFS=:
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@109 -- # read -r id start end _ _ pt _
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@115 -- # (( part_id++ ))
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@119 -- # (( start_part_sector > 0 ))
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@119 -- # (( (drive_size * 512) >> 20 == lvb_size ))
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@122 -- # [[ gpt == gpt ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@122 -- # vm_exec 0 'sgdisk -e /dev/sda'
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:25.160    10:38:14 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:25.160   10:38:14 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'sgdisk -e /dev/sda'
00:10:25.419  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:26.353  Warning: The kernel is still using the old partition table.
00:10:26.353  The new table will be used at the next reboot or after you
00:10:26.353  run partprobe(8) or kpartx(8)
00:10:26.353  The operation has completed successfully.
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@124 -- # timing_enter run_vm_cmd
00:10:26.353   10:38:16 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:26.353   10:38:16 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@125 -- # vm_exec 0 'parted -s /dev/sda mkpart primary 10483712s 100%; sleep 1; partprobe'
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:26.353    10:38:16 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:26.353   10:38:16 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'parted -s /dev/sda mkpart primary 10483712s 100%; sleep 1; partprobe'
00:10:26.612  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:26.612  Warning: The resulting partition is not properly aligned for best performance: 10483712s % 8192s != 0s
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@126 -- # vm_exec 0 'mkfs.ext4 -F /dev/sda6; mkdir -p /mnt/sda6test; mount /dev/sda6 /mnt/sda6test;'
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:27.994    10:38:17 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:27.994   10:38:17 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'mkfs.ext4 -F /dev/sda6; mkdir -p /mnt/sda6test; mount /dev/sda6 /mnt/sda6test;'
00:10:27.994  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:27.994  mke2fs 1.46.5 (30-Dec-2021)
00:10:29.131  Discarding device blocks:    0/3809531  2097152/3809531               done                            
00:10:29.131  Creating filesystem with 3809531 4k blocks and 952848 inodes
00:10:29.131  Filesystem UUID: ee04337d-6c9f-4207-9530-a468e06d9884
00:10:29.131  Superblock backups stored on blocks: 
00:10:29.131  	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
00:10:29.131  
00:10:29.131  Allocating group tables:   0/117       done                            
00:10:29.131  Writing inode tables:   0/117       done                            
00:10:31.661  Creating journal (16384 blocks): done
00:10:31.661  Writing superblocks and filesystem accounting information:   0/117       done
00:10:31.661  
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@127 -- # vm_exec 0 'fio --name=integrity --bsrange=4k-512k --iodepth=128 --numjobs=1 --direct=1  --thread=1 --group_reporting=1 --rw=randrw --rwmixread=70 --filename=/mnt/sda6test/test_file  --verify=md5 --do_verify=1 --verify_backlog=1024 --fsync_on_close=1 --runtime=20  --time_based=1 --size=1024m --verify_state_save=0'
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:31.661    10:38:20 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:31.661   10:38:20 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'fio --name=integrity --bsrange=4k-512k --iodepth=128 --numjobs=1 --direct=1  --thread=1 --group_reporting=1 --rw=randrw --rwmixread=70 --filename=/mnt/sda6test/test_file  --verify=md5 --do_verify=1 --verify_backlog=1024 --fsync_on_close=1 --runtime=20  --time_based=1 --size=1024m --verify_state_save=0'
00:10:31.661  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:31.661  integrity: (g=0): rw=randrw, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=psync, iodepth=128
00:10:31.661  fio-3.28
00:10:31.661  Starting 1 thread
00:10:31.661  integrity: Laying out IO file (1 file / 1024MiB)
00:10:53.582  
00:10:53.582  integrity: (groupid=0, jobs=1): err= 0: pid=979: Tue Nov 19 09:38:42 2024
00:10:53.582    read: IOPS=2242, BW=385MiB/s (404MB/s)(7704MiB/20001msec)
00:10:53.582      clat (usec): min=95, max=90256, avg=208.68, stdev=1239.82
00:10:53.582       lat (usec): min=95, max=90256, avg=208.75, stdev=1239.82
00:10:53.582      clat percentiles (usec):
00:10:53.582       |  1.00th=[  105],  5.00th=[  117], 10.00th=[  123], 20.00th=[  133],
00:10:53.582       | 30.00th=[  145], 40.00th=[  157], 50.00th=[  174], 60.00th=[  192],
00:10:53.582       | 70.00th=[  210], 80.00th=[  237], 90.00th=[  269], 95.00th=[  289],
00:10:53.582       | 99.00th=[  351], 99.50th=[  400], 99.90th=[  562], 99.95th=[ 1106],
00:10:53.582       | 99.99th=[71828]
00:10:53.582     bw (  KiB/s): min=69640, max=523928, per=71.55%, avg=282213.33, stdev=121130.48, samples=39
00:10:53.582     iops        : min=  386, max= 2784, avg=1598.82, stdev=666.15, samples=39
00:10:53.582    write: IOPS=672, BW=115MiB/s (120MB/s)(2291MiB/20001msec); 0 zone resets
00:10:53.582      clat (usec): min=47, max=90719, avg=179.52, stdev=1972.03
00:10:53.582       lat (usec): min=55, max=90933, avg=501.99, stdev=1997.45
00:10:53.582      clat percentiles (usec):
00:10:53.582       |  1.00th=[   53],  5.00th=[   59], 10.00th=[   63], 20.00th=[   72],
00:10:53.582       | 30.00th=[   84], 40.00th=[   95], 50.00th=[  111], 60.00th=[  126],
00:10:53.582       | 70.00th=[  147], 80.00th=[  172], 90.00th=[  208], 95.00th=[  233],
00:10:53.582       | 99.00th=[  255], 99.50th=[  265], 99.90th=[11600], 99.95th=[63701],
00:10:53.582       | 99.99th=[83362]
00:10:53.582     bw (  KiB/s): min=26336, max=208128, per=100.00%, avg=119008.00, stdev=52087.87, samples=39
00:10:53.582     iops        : min=  150, max= 1174, avg=682.67, stdev=292.68, samples=39
00:10:53.582    lat (usec)   : 50=0.03%, 100=10.23%, 250=77.55%, 500=12.05%, 750=0.06%
00:10:53.582    lat (usec)   : 1000=0.01%
00:10:53.582    lat (msec)   : 2=0.01%, 10=0.01%, 20=0.02%, 50=0.01%, 100=0.04%
00:10:53.582    cpu          : usr=41.22%, sys=2.90%, ctx=58444, majf=0, minf=25
00:10:53.582    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:53.582       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:53.582       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:53.582       issued rwts: total=44855,13460,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:53.582       latency   : target=0, window=0, percentile=100.00%, depth=128
00:10:53.582  
00:10:53.582  Run status group 0 (all jobs):
00:10:53.582     READ: bw=385MiB/s (404MB/s), 385MiB/s-385MiB/s (404MB/s-404MB/s), io=7704MiB (8078MB), run=20001-20001msec
00:10:53.582    WRITE: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=2291MiB (2402MB), run=20001-20001msec
00:10:53.582  
00:10:53.582  Disk stats (read/write):
00:10:53.582    sda: ios=44375/13506, merge=3/20, ticks=8740/4104, in_queue=12844, util=98.99%
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@131 -- # vm_exec 0 'umount /mnt/sda6test; rm -rf /mnt/sda6test'
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'umount /mnt/sda6test; rm -rf /mnt/sda6test'
00:10:53.582  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@132 -- # vm_exec 0 'cat /sys/block/sda/sda1/alignment_offset'
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.582     10:38:42 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat /sys/block/sda/sda1/alignment_offset'
00:10:53.582  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@132 -- # alignment_offset=0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@133 -- # echo 'alignment_offset: 0'
00:10:53.582  alignment_offset: 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@134 -- # timing_exit run_vm_cmd
00:10:53.582   10:38:42 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:53.582   10:38:42 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@136 -- # vm_shutdown_all
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@489 -- # vm_list_all
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@466 -- # vms=()
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@466 -- # local vms
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@492 -- # vm_shutdown 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@424 -- # vm_is_running 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@376 -- # local vm_pid
00:10:53.582    10:38:42 vhost.vhost_boot -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@377 -- # vm_pid=1865792
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@379 -- # /bin/kill -0 1865792
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@380 -- # return 0
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:53.582   10:38:42 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:10:53.583  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@432 -- # set +e
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@338 -- # local vm_num=0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@339 -- # shift
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:53.583  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:10:53.583  INFO: VM0 is shutting down - wait a while to complete
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@435 -- # set -e
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:53.583  INFO: Waiting for VMs to shutdown...
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@498 -- # vm_is_running 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@376 -- # local vm_pid
00:10:53.583    10:38:42 vhost.vhost_boot -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@377 -- # vm_pid=1865792
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@379 -- # /bin/kill -0 1865792
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@380 -- # return 0
00:10:53.583   10:38:42 vhost.vhost_boot -- vhost/common.sh@500 -- # sleep 1
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 0
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 1
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 2
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_SET_VRING_ENABLE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) set queue enable: 0 to qp idx: 3
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_VRING_BASE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:0 file:0
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_VRING_BASE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:1 file:0
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_VRING_BASE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:2 file:38074
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) read message VHOST_USER_GET_VRING_BASE
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vring base idx:3 file:24982
00:10:53.906  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.vhost_vm.0) vhost peer closed
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@498 -- # vm_is_running 0
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@309 -- # return 0
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@373 -- # return 1
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:54.164   10:38:43 vhost.vhost_boot -- vhost/common.sh@500 -- # sleep 1
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:55.097  INFO: All VMs successfully shut down
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost/common.sh@505 -- # return 0
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@138 -- # timing_enter clean_vhost
00:10:55.097   10:38:44 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:55.097   10:38:44 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:55.097   10:38:44 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.vhost_vm.0 0
00:10:55.355   10:38:44 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@140 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_delete_controller naa.vhost_vm.0
00:10:55.613   10:38:45 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@141 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_delete 4916491f-7e66-4bf3-95b6-3b5c7cc5278b
00:10:55.613   10:38:45 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@142 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_delete_lvstore -u cbe9c436-2af1-4950-9875-6dec3aabd711
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@143 -- # timing_exit clean_vhost
00:10:55.871   10:38:45 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:55.871   10:38:45 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@145 -- # vhost_kill 0
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@202 -- # local rc=0
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@203 -- # local vhost_name=0
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@210 -- # local vhost_dir
00:10:55.871    10:38:45 vhost.vhost_boot -- vhost/common.sh@211 -- # get_vhost_dir 0
00:10:55.871    10:38:45 vhost.vhost_boot -- vhost/common.sh@105 -- # local vhost_name=0
00:10:55.871    10:38:45 vhost.vhost_boot -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:55.871    10:38:45 vhost.vhost_boot -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:10:55.871   10:38:45 vhost.vhost_boot -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:55.871   10:38:45 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@220 -- # local vhost_pid
00:10:55.871    10:38:45 vhost.vhost_boot -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@221 -- # vhost_pid=1861945
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1861945) app'
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1861945) app'
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1861945) app'
00:10:55.871  INFO: killing vhost (PID 1861945) app
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@224 -- # kill -INT 1861945
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@60 -- # local verbose_out
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@61 -- # false
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@62 -- # verbose_out=
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@70 -- # shift
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:10:55.871  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i = 0 ))
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@227 -- # kill -0 1861945
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@228 -- # echo .
00:10:55.871  .
00:10:55.871   10:38:45 vhost.vhost_boot -- vhost/common.sh@229 -- # sleep 1
00:10:57.244   10:38:46 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i++ ))
00:10:57.245   10:38:46 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:57.245   10:38:46 vhost.vhost_boot -- vhost/common.sh@227 -- # kill -0 1861945
00:10:57.245   10:38:46 vhost.vhost_boot -- vhost/common.sh@228 -- # echo .
00:10:57.245  .
00:10:57.245   10:38:46 vhost.vhost_boot -- vhost/common.sh@229 -- # sleep 1
00:10:58.179   10:38:47 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i++ ))
00:10:58.179   10:38:47 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:58.179   10:38:47 vhost.vhost_boot -- vhost/common.sh@227 -- # kill -0 1861945
00:10:58.179   10:38:47 vhost.vhost_boot -- vhost/common.sh@228 -- # echo .
00:10:58.179  .
00:10:58.179   10:38:47 vhost.vhost_boot -- vhost/common.sh@229 -- # sleep 1
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i++ ))
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@226 -- # (( i < 60 ))
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@227 -- # kill -0 1861945
00:10:59.114  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1861945) - No such process
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@231 -- # break
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@234 -- # kill -0 1861945
00:10:59.114  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1861945) - No such process
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@239 -- # kill -0 1861945
00:10:59.114  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1861945) - No such process
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@245 -- # is_pid_child 1861945
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1668 -- # local pid=1861945 _pid
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:59.114    10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1667 -- # jobs -pr
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1670 -- # read -r _pid
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1674 -- # return 1
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@261 -- # return 0
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost_boot/vhost_boot.sh@147 -- # vhosttestfini
00:10:59.114   10:38:48 vhost.vhost_boot -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:10:59.114  
00:10:59.114  real	1m13.287s
00:10:59.114  user	1m13.720s
00:10:59.114  sys	0m17.497s
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:59.114   10:38:48 vhost.vhost_boot -- common/autotest_common.sh@10 -- # set +x
00:10:59.114  ************************************
00:10:59.114  END TEST vhost_boot
00:10:59.114  ************************************
00:10:59.114   10:38:48 vhost -- vhost/vhost.sh@25 -- # '[' 0 -eq 1 ']'
00:10:59.114   10:38:48 vhost -- vhost/vhost.sh@60 -- # echo 'Running lvol integrity suite...'
00:10:59.114  Running lvol integrity suite...
00:10:59.114   10:38:48 vhost -- vhost/vhost.sh@61 -- # run_test vhost_scsi_lvol_integrity /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh -x --fio-bin=/usr/src/fio-static/fio --ctrl-type=spdk_vhost_scsi --thin-provisioning
00:10:59.114   10:38:48 vhost -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:59.114   10:38:48 vhost -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:59.114   10:38:48 vhost -- common/autotest_common.sh@10 -- # set +x
00:10:59.114  ************************************
00:10:59.114  START TEST vhost_scsi_lvol_integrity
00:10:59.114  ************************************
00:10:59.114   10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh -x --fio-bin=/usr/src/fio-static/fio --ctrl-type=spdk_vhost_scsi --thin-provisioning
00:10:59.114  * Looking for test storage...
00:10:59.114  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:10:59.114    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:59.114     10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1693 -- # lcov --version
00:10:59.114     10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@336 -- # IFS=.-:
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@336 -- # read -ra ver1
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@337 -- # IFS=.-:
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@337 -- # read -ra ver2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@338 -- # local 'op=<'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@340 -- # ver1_l=2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@341 -- # ver2_l=1
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@344 -- # case "$op" in
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@345 -- # : 1
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@365 -- # decimal 1
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@353 -- # local d=1
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@355 -- # echo 1
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@365 -- # ver1[v]=1
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@366 -- # decimal 2
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@353 -- # local d=2
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@355 -- # echo 2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@366 -- # ver2[v]=2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@368 -- # return 0
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:59.374  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.374  		--rc genhtml_branch_coverage=1
00:10:59.374  		--rc genhtml_function_coverage=1
00:10:59.374  		--rc genhtml_legend=1
00:10:59.374  		--rc geninfo_all_blocks=1
00:10:59.374  		--rc geninfo_unexecuted_blocks=1
00:10:59.374  		
00:10:59.374  		'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:59.374  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.374  		--rc genhtml_branch_coverage=1
00:10:59.374  		--rc genhtml_function_coverage=1
00:10:59.374  		--rc genhtml_legend=1
00:10:59.374  		--rc geninfo_all_blocks=1
00:10:59.374  		--rc geninfo_unexecuted_blocks=1
00:10:59.374  		
00:10:59.374  		'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:59.374  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.374  		--rc genhtml_branch_coverage=1
00:10:59.374  		--rc genhtml_function_coverage=1
00:10:59.374  		--rc genhtml_legend=1
00:10:59.374  		--rc geninfo_all_blocks=1
00:10:59.374  		--rc geninfo_unexecuted_blocks=1
00:10:59.374  		
00:10:59.374  		'
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:59.374  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:59.374  		--rc genhtml_branch_coverage=1
00:10:59.374  		--rc genhtml_function_coverage=1
00:10:59.374  		--rc genhtml_legend=1
00:10:59.374  		--rc geninfo_all_blocks=1
00:10:59.374  		--rc geninfo_unexecuted_blocks=1
00:10:59.374  		
00:10:59.374  		'
00:10:59.374   10:38:48 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@6 -- # : false
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@7 -- # : /root/vhost_test
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@9 -- # : qemu-img
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/..
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vhost-phy-autotest
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:10:59.374    10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:10:59.374      10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh
00:10:59.374     10:38:48 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:10:59.374    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/autotest.config
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@2 -- # vhost_0_main_core=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:10:59.374     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/common.sh
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/cgroups.sh
00:10:59.375      10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:10:59.375       10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@244 -- # check_cgroup
00:10:59.375       10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:10:59.375       10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:10:59.375       10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@10 -- # echo 2
00:10:59.375      10:38:49 vhost.vhost_scsi_lvol_integrity -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@10 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@15 -- # shopt -s extglob
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- paths/export.sh@5 -- # export PATH
00:10:59.375     10:38:49 vhost.vhost_scsi_lvol_integrity -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@12 -- # get_vhost_dir 0
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@12 -- # rpc_py='/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@14 -- # vm_count=1
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@15 -- # ctrl_type=spdk_vhost_scsi
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@16 -- # use_fs=false
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@17 -- # distribute_cores=false
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@71 -- # set -x
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@72 -- # x=-x
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@59 -- # case "$OPTARG" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@61 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@59 -- # case "$OPTARG" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@63 -- # ctrl_type=spdk_vhost_scsi
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@59 -- # case "$OPTARG" in
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@65 -- # thin=' -t '
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@78 -- # vhosttestinit
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:10:59.375   10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@81 -- # source /dev/fd/62
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@81 -- # gen_cpu_vm_spdk_config 1 2 4 '' 0
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1401 -- # local vm_count=1 vm_cpu_num=2 vm
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1402 -- # local spdk_cpu_num=4 spdk_cpu_list= spdk_cpus
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1403 -- # nodes=('0')
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1403 -- # local nodes node
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1404 -- # local env
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1406 -- # spdk_cpus=spdk_cpu_num
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1407 -- # [[ -n '' ]]
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1409 -- # (( 1 > 0 ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1410 -- # (( 1 == 1 ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1410 -- # node=0
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1411 -- # (( vm = 0 ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1411 -- # (( vm < vm_count ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1412 -- # env+=("VM${vm}_NODE=${nodes[vm]:-$node}")
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1411 -- # (( vm++ ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1411 -- # (( vm < vm_count ))
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1416 -- # env+=("$spdk_cpus=${!spdk_cpus}")
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1417 -- # env+=("vm_count=$vm_count")
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1418 -- # env+=("vm_cpu_num=$vm_cpu_num")
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1420 -- # export VM0_NODE=0 spdk_cpu_num=4 vm_count=1 vm_cpu_num=2
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1420 -- # VM0_NODE=0
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1420 -- # spdk_cpu_num=4
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1420 -- # vm_count=1
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1420 -- # vm_cpu_num=2
00:10:59.375    10:38:49 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1422 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/vhost/conf-generator -p cpu
00:11:02.663  Requested number of SPDK CPUs allocated: 4
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- fd/62@4 -- # VM_0_qemu_mask=0,1
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- fd/62@5 -- # VM_0_qemu_numa_node=0
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- fd/62@6 -- # vhost_0_reactor_mask='[2,3,4,5]'
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- fd/62@7 -- # vhost_0_main_core=2
00:11:02.663   10:38:52 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@82 -- # spdk_mask='[2,3,4,5]'
00:11:02.663   10:38:52 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@84 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' SIGTERM SIGABRT ERR
00:11:02.663   10:38:52 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@86 -- # vm_kill_all
00:11:02.663   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@476 -- # local vm
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@477 -- # vm_list_all
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@466 -- # vms=()
00:11:02.663    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@466 -- # local vms
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@478 -- # vm_kill 0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@446 -- # return 0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@88 -- # notice 'running SPDK vhost'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'running SPDK vhost'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: running SPDK vhost'
00:11:02.664  INFO: running SPDK vhost
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@89 -- # vhost_run -n 0 -- --cpumask '[2,3,4,5]'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@116 -- # local OPTIND
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@117 -- # local vhost_name
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@118 -- # local run_gen_nvme=true
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@119 -- # local vhost_bin=vhost
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@120 -- # vhost_args=()
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@120 -- # local vhost_args
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@121 -- # cmd=()
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@121 -- # local cmd
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@124 -- # case "$optchar" in
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@125 -- # vhost_name=0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@137 -- # shift 3
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@139 -- # vhost_args=("$@")
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@141 -- # [[ -z 0 ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@146 -- # local vhost_dir
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@147 -- # get_vhost_dir 0
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:02.664    10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@147 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@148 -- # local vhost_app=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@149 -- # local vhost_log_file=/root/vhost_test/vhost/0/vhost.log
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@150 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@151 -- # local vhost_socket=/root/vhost_test/vhost/0/usvhost
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@152 -- # notice 'starting vhost app in background'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'starting vhost app in background'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: starting vhost app in background'
00:11:02.664  INFO: starting vhost app in background
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@153 -- # [[ -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@154 -- # [[ -d /root/vhost_test/vhost/0 ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@155 -- # mkdir -p /root/vhost_test/vhost/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@157 -- # [[ ! -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@162 -- # cmd=("$vhost_app" "-r" "$vhost_dir/rpc.sock" "${vhost_args[@]}")
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@163 -- # [[ vhost =~ vhost ]]
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@164 -- # cmd+=(-S "$vhost_dir")
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@167 -- # notice 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:02.664  INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@168 -- # notice 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:02.664  INFO: Socket:      /root/vhost_test/vhost/0/usvhost
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@169 -- # notice 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:02.664  INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@171 -- # timing_enter vhost_start
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@173 -- # iobuf_small_count=16383
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@174 -- # iobuf_large_count=2047
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@177 -- # vhost_pid=1873307
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@178 -- # echo 1873307
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@176 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask '[2,3,4,5]' -S /root/vhost_test/vhost/0 --wait-for-rpc
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@180 -- # notice 'waiting for app to run...'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'waiting for app to run...'
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:02.664   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: waiting for app to run...'
00:11:02.665  INFO: waiting for app to run...
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@181 -- # waitforlisten 1873307 /root/vhost_test/vhost/0/rpc.sock
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@835 -- # '[' -z 1873307 ']'
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:11:02.665  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:02.665   10:38:52 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:02.665  [2024-11-19 10:38:52.371106] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:11:02.665  [2024-11-19 10:38:52.371225] [ DPDK EAL parameters: vhost --no-shconf -l 2,3,4,5 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873307 ]
00:11:02.665  EAL: No free 2048 kB hugepages reported on node 1
00:11:02.924  [2024-11-19 10:38:52.505790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:02.924  [2024-11-19 10:38:52.614226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:02.924  [2024-11-19 10:38:52.614345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:02.924  [2024-11-19 10:38:52.614401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:02.924  [2024-11-19 10:38:52.614429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:11:03.491   10:38:53 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:03.491   10:38:53 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@868 -- # return 0
00:11:03.491   10:38:53 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@183 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock iobuf_set_options --small-pool-count=16383 --large-pool-count=2047
00:11:03.750   10:38:53 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@188 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock framework_start_init
00:11:04.318   10:38:54 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0 != *\-\-\n\o\-\p\c\i* ]]
00:11:04.318   10:38:54 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0 != *\-\u* ]]
00:11:04.318   10:38:54 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@192 -- # true
00:11:04.318   10:38:54 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:04.318   10:38:54 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@196 -- # notice 'vhost started - pid=1873307'
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'vhost started - pid=1873307'
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: vhost started - pid=1873307'
00:11:05.697  INFO: vhost started - pid=1873307
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@198 -- # timing_exit vhost_start
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@90 -- # notice ...
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO ...
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: ...'
00:11:05.697  INFO: ...
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@92 -- # trap 'clean_lvol_cfg; error_exit "${FUNCNAME}" "${LINENO}"' SIGTERM SIGABRT ERR
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@94 -- # lvol_bdevs=()
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@95 -- # used_vms=
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@97 -- # id=0
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@99 -- # notice 'Creating lvol store on device Nvme0n1'
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Creating lvol store on device Nvme0n1'
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:05.697   10:38:55 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Creating lvol store on device Nvme0n1'
00:11:05.697  INFO: Creating lvol store on device Nvme0n1
00:11:05.697    10:38:55 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@100 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create_lvstore Nvme0n1 lvs_0 -c 4194304
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@100 -- # ls_guid=48a3e668-55aa-4c49-846a-57de4b525423
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j = 0 ))
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j < vm_count ))
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@103 -- # notice 'Creating lvol bdev for VM 0 on lvol store 48a3e668-55aa-4c49-846a-57de4b525423'
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Creating lvol bdev for VM 0 on lvol store 48a3e668-55aa-4c49-846a-57de4b525423'
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:06.634   10:38:56 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Creating lvol bdev for VM 0 on lvol store 48a3e668-55aa-4c49-846a-57de4b525423'
00:11:06.634  INFO: Creating lvol bdev for VM 0 on lvol store 48a3e668-55aa-4c49-846a-57de4b525423
00:11:06.634    10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@104 -- # get_lvs_free_mb 48a3e668-55aa-4c49-846a-57de4b525423
00:11:06.634    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1368 -- # local lvs_uuid=48a3e668-55aa-4c49-846a-57de4b525423
00:11:06.634    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1369 -- # local lvs_info
00:11:06.634    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1370 -- # local fc
00:11:06.634    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1371 -- # local cs
00:11:06.634     10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_get_lvstores
00:11:06.894    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1372 -- # lvs_info='[
00:11:06.894    {
00:11:06.894      "uuid": "48a3e668-55aa-4c49-846a-57de4b525423",
00:11:06.894      "name": "lvs_0",
00:11:06.894      "base_bdev": "Nvme0n1",
00:11:06.894      "total_data_clusters": 457407,
00:11:06.894      "free_clusters": 457407,
00:11:06.894      "block_size": 512,
00:11:06.894      "cluster_size": 4194304
00:11:06.894    }
00:11:06.894  ]'
00:11:06.894     10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="48a3e668-55aa-4c49-846a-57de4b525423") .free_clusters'
00:11:06.894    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1373 -- # fc=457407
00:11:06.894     10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="48a3e668-55aa-4c49-846a-57de4b525423") .cluster_size'
00:11:07.153    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1374 -- # cs=4194304
00:11:07.153    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1377 -- # free_mb=1829628
00:11:07.153    10:38:56 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1378 -- # echo 1829628
00:11:07.153   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@104 -- # free_mb=1829628
00:11:07.153   10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@105 -- # size=1829628
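The `get_lvs_free_mb` step above converts the lvstore's free cluster count into megabytes. A minimal sketch of that arithmetic, using the values logged by `bdev_lvol_get_lvstores` (variable names `fc`/`cs` mirror the trace; this is an annotation, not part of the test script):

```shell
# Reconstruct the free-space computation from the logged lvstore values.
fc=457407        # free_clusters reported by bdev_lvol_get_lvstores
cs=4194304       # cluster_size in bytes (4 MiB)
# free MB = free clusters * cluster size, converted from bytes to MiB
free_mb=$(( fc * cs / 1024 / 1024 ))
echo "$free_mb"  # 1829628, matching free_mb/size in the log
```

This is why the subsequent `bdev_lvol_create` request asks for exactly 1829628 MB: the test sizes the logical volume to consume the entire lvstore.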
00:11:07.154    10:38:56 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@106 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create -u 48a3e668-55aa-4c49-846a-57de4b525423 lbd_vm_0 1829628 -t
00:11:09.064   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@106 -- # lb_name=6e03160c-5372-407f-a809-79382c3da070
00:11:09.064   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@107 -- # lvol_bdevs+=("$lb_name")
00:11:09.064   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j++ ))
00:11:09.064   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j < vm_count ))
00:11:09.064    10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@110 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_get_bdevs
00:11:09.064   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@110 -- # bdev_info='[
00:11:09.064    {
00:11:09.064      "name": "Nvme0n1",
00:11:09.064      "aliases": [
00:11:09.064        "36344730-5260-5497-0025-38450000011d"
00:11:09.064      ],
00:11:09.064      "product_name": "NVMe disk",
00:11:09.064      "block_size": 512,
00:11:09.064      "num_blocks": 3750748848,
00:11:09.064      "uuid": "36344730-5260-5497-0025-38450000011d",
00:11:09.064      "numa_id": 0,
00:11:09.064      "assigned_rate_limits": {
00:11:09.064        "rw_ios_per_sec": 0,
00:11:09.064        "rw_mbytes_per_sec": 0,
00:11:09.064        "r_mbytes_per_sec": 0,
00:11:09.064        "w_mbytes_per_sec": 0
00:11:09.064      },
00:11:09.064      "claimed": true,
00:11:09.064      "claim_type": "read_many_write_one",
00:11:09.064      "zoned": false,
00:11:09.064      "supported_io_types": {
00:11:09.064        "read": true,
00:11:09.064        "write": true,
00:11:09.064        "unmap": true,
00:11:09.064        "flush": true,
00:11:09.064        "reset": true,
00:11:09.064        "nvme_admin": true,
00:11:09.064        "nvme_io": true,
00:11:09.064        "nvme_io_md": false,
00:11:09.064        "write_zeroes": true,
00:11:09.064        "zcopy": false,
00:11:09.064        "get_zone_info": false,
00:11:09.064        "zone_management": false,
00:11:09.064        "zone_append": false,
00:11:09.064        "compare": true,
00:11:09.064        "compare_and_write": false,
00:11:09.064        "abort": true,
00:11:09.064        "seek_hole": false,
00:11:09.064        "seek_data": false,
00:11:09.064        "copy": false,
00:11:09.064        "nvme_iov_md": false
00:11:09.064      },
00:11:09.064      "driver_specific": {
00:11:09.064        "nvme": [
00:11:09.064          {
00:11:09.064            "pci_address": "0000:5e:00.0",
00:11:09.064            "trid": {
00:11:09.064              "trtype": "PCIe",
00:11:09.064              "traddr": "0000:5e:00.0"
00:11:09.064            },
00:11:09.064            "ctrlr_data": {
00:11:09.064              "cntlid": 6,
00:11:09.064              "vendor_id": "0x144d",
00:11:09.064              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:11:09.064              "serial_number": "S64GNE0R605497",
00:11:09.064              "firmware_revision": "GDC5302Q",
00:11:09.064              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:11:09.064              "oacs": {
00:11:09.064                "security": 1,
00:11:09.065                "format": 1,
00:11:09.065                "firmware": 1,
00:11:09.065                "ns_manage": 1
00:11:09.065              },
00:11:09.065              "multi_ctrlr": false,
00:11:09.065              "ana_reporting": false
00:11:09.065            },
00:11:09.065            "vs": {
00:11:09.065              "nvme_version": "1.4"
00:11:09.065            },
00:11:09.065            "ns_data": {
00:11:09.065              "id": 1,
00:11:09.065              "can_share": false
00:11:09.065            },
00:11:09.065            "security": {
00:11:09.065              "opal": true
00:11:09.065            }
00:11:09.065          }
00:11:09.065        ],
00:11:09.065        "mp_policy": "active_passive"
00:11:09.065      }
00:11:09.065    },
00:11:09.065    {
00:11:09.065      "name": "Nvme1n1",
00:11:09.065      "aliases": [
00:11:09.065        "bf9e8a9c-07a7-4245-83d1-2e91afd7063e"
00:11:09.065      ],
00:11:09.065      "product_name": "NVMe disk",
00:11:09.065      "block_size": 512,
00:11:09.065      "num_blocks": 732585168,
00:11:09.065      "uuid": "bf9e8a9c-07a7-4245-83d1-2e91afd7063e",
00:11:09.065      "numa_id": 1,
00:11:09.065      "assigned_rate_limits": {
00:11:09.065        "rw_ios_per_sec": 0,
00:11:09.065        "rw_mbytes_per_sec": 0,
00:11:09.065        "r_mbytes_per_sec": 0,
00:11:09.065        "w_mbytes_per_sec": 0
00:11:09.065      },
00:11:09.065      "claimed": false,
00:11:09.065      "zoned": false,
00:11:09.065      "supported_io_types": {
00:11:09.065        "read": true,
00:11:09.065        "write": true,
00:11:09.065        "unmap": true,
00:11:09.065        "flush": true,
00:11:09.065        "reset": true,
00:11:09.065        "nvme_admin": true,
00:11:09.065        "nvme_io": true,
00:11:09.065        "nvme_io_md": false,
00:11:09.065        "write_zeroes": true,
00:11:09.065        "zcopy": false,
00:11:09.065        "get_zone_info": false,
00:11:09.065        "zone_management": false,
00:11:09.065        "zone_append": false,
00:11:09.065        "compare": false,
00:11:09.065        "compare_and_write": false,
00:11:09.065        "abort": true,
00:11:09.065        "seek_hole": false,
00:11:09.065        "seek_data": false,
00:11:09.065        "copy": false,
00:11:09.065        "nvme_iov_md": false
00:11:09.065      },
00:11:09.065      "driver_specific": {
00:11:09.065        "nvme": [
00:11:09.065          {
00:11:09.065            "pci_address": "0000:af:00.0",
00:11:09.065            "trid": {
00:11:09.065              "trtype": "PCIe",
00:11:09.065              "traddr": "0000:af:00.0"
00:11:09.065            },
00:11:09.065            "ctrlr_data": {
00:11:09.065              "cntlid": 0,
00:11:09.065              "vendor_id": "0x8086",
00:11:09.065              "model_number": "INTEL SSDPED1K375GA",
00:11:09.065              "serial_number": "PHKS7481000F375AGN",
00:11:09.065              "firmware_revision": "E2010600",
00:11:09.065              "oacs": {
00:11:09.065                "security": 1,
00:11:09.065                "format": 1,
00:11:09.065                "firmware": 1,
00:11:09.065                "ns_manage": 0
00:11:09.065              },
00:11:09.065              "multi_ctrlr": false,
00:11:09.065              "ana_reporting": false
00:11:09.065            },
00:11:09.065            "vs": {
00:11:09.065              "nvme_version": "1.0"
00:11:09.065            },
00:11:09.065            "ns_data": {
00:11:09.065              "id": 1,
00:11:09.065              "can_share": false
00:11:09.065            },
00:11:09.065            "security": {
00:11:09.065              "opal": true
00:11:09.065            }
00:11:09.065          }
00:11:09.065        ],
00:11:09.065        "mp_policy": "active_passive"
00:11:09.065      }
00:11:09.065    },
00:11:09.065    {
00:11:09.065      "name": "Nvme2n1",
00:11:09.065      "aliases": [
00:11:09.065        "e3e6f570-f67d-4e01-9319-3a62f8a2d812"
00:11:09.065      ],
00:11:09.065      "product_name": "NVMe disk",
00:11:09.065      "block_size": 512,
00:11:09.065      "num_blocks": 732585168,
00:11:09.065      "uuid": "e3e6f570-f67d-4e01-9319-3a62f8a2d812",
00:11:09.065      "numa_id": 1,
00:11:09.065      "assigned_rate_limits": {
00:11:09.065        "rw_ios_per_sec": 0,
00:11:09.065        "rw_mbytes_per_sec": 0,
00:11:09.065        "r_mbytes_per_sec": 0,
00:11:09.065        "w_mbytes_per_sec": 0
00:11:09.065      },
00:11:09.065      "claimed": false,
00:11:09.065      "zoned": false,
00:11:09.065      "supported_io_types": {
00:11:09.065        "read": true,
00:11:09.065        "write": true,
00:11:09.065        "unmap": true,
00:11:09.065        "flush": true,
00:11:09.065        "reset": true,
00:11:09.065        "nvme_admin": true,
00:11:09.065        "nvme_io": true,
00:11:09.065        "nvme_io_md": false,
00:11:09.065        "write_zeroes": true,
00:11:09.065        "zcopy": false,
00:11:09.065        "get_zone_info": false,
00:11:09.065        "zone_management": false,
00:11:09.065        "zone_append": false,
00:11:09.065        "compare": false,
00:11:09.065        "compare_and_write": false,
00:11:09.065        "abort": true,
00:11:09.065        "seek_hole": false,
00:11:09.065        "seek_data": false,
00:11:09.065        "copy": false,
00:11:09.065        "nvme_iov_md": false
00:11:09.065      },
00:11:09.065      "driver_specific": {
00:11:09.065        "nvme": [
00:11:09.065          {
00:11:09.065            "pci_address": "0000:b0:00.0",
00:11:09.065            "trid": {
00:11:09.065              "trtype": "PCIe",
00:11:09.065              "traddr": "0000:b0:00.0"
00:11:09.065            },
00:11:09.065            "ctrlr_data": {
00:11:09.065              "cntlid": 0,
00:11:09.065              "vendor_id": "0x8086",
00:11:09.065              "model_number": "INTEL SSDPED1K375GA",
00:11:09.065              "serial_number": "PHKS7482004A375AGN",
00:11:09.065              "firmware_revision": "E2010600",
00:11:09.065              "oacs": {
00:11:09.065                "security": 1,
00:11:09.065                "format": 1,
00:11:09.065                "firmware": 1,
00:11:09.065                "ns_manage": 0
00:11:09.065              },
00:11:09.065              "multi_ctrlr": false,
00:11:09.065              "ana_reporting": false
00:11:09.065            },
00:11:09.065            "vs": {
00:11:09.065              "nvme_version": "1.0"
00:11:09.065            },
00:11:09.065            "ns_data": {
00:11:09.065              "id": 1,
00:11:09.065              "can_share": false
00:11:09.065            },
00:11:09.065            "security": {
00:11:09.065              "opal": true
00:11:09.065            }
00:11:09.065          }
00:11:09.065        ],
00:11:09.065        "mp_policy": "active_passive"
00:11:09.065      }
00:11:09.065    },
00:11:09.065    {
00:11:09.065      "name": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.065      "aliases": [
00:11:09.065        "lvs_0/lbd_vm_0"
00:11:09.065      ],
00:11:09.065      "product_name": "Logical Volume",
00:11:09.065      "block_size": 512,
00:11:09.065      "num_blocks": 3747078144,
00:11:09.065      "uuid": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.065      "assigned_rate_limits": {
00:11:09.065        "rw_ios_per_sec": 0,
00:11:09.065        "rw_mbytes_per_sec": 0,
00:11:09.065        "r_mbytes_per_sec": 0,
00:11:09.065        "w_mbytes_per_sec": 0
00:11:09.065      },
00:11:09.065      "claimed": false,
00:11:09.065      "zoned": false,
00:11:09.065      "supported_io_types": {
00:11:09.065        "read": true,
00:11:09.065        "write": true,
00:11:09.065        "unmap": true,
00:11:09.065        "flush": false,
00:11:09.065        "reset": true,
00:11:09.065        "nvme_admin": false,
00:11:09.065        "nvme_io": false,
00:11:09.065        "nvme_io_md": false,
00:11:09.065        "write_zeroes": true,
00:11:09.065        "zcopy": false,
00:11:09.065        "get_zone_info": false,
00:11:09.065        "zone_management": false,
00:11:09.065        "zone_append": false,
00:11:09.065        "compare": false,
00:11:09.065        "compare_and_write": false,
00:11:09.065        "abort": false,
00:11:09.065        "seek_hole": true,
00:11:09.065        "seek_data": true,
00:11:09.065        "copy": false,
00:11:09.065        "nvme_iov_md": false
00:11:09.065      },
00:11:09.065      "driver_specific": {
00:11:09.065        "lvol": {
00:11:09.065          "lvol_store_uuid": "48a3e668-55aa-4c49-846a-57de4b525423",
00:11:09.065          "base_bdev": "Nvme0n1",
00:11:09.065          "thin_provision": true,
00:11:09.065          "num_allocated_clusters": 0,
00:11:09.065          "snapshot": false,
00:11:09.065          "clone": false,
00:11:09.065          "esnap_clone": false
00:11:09.065        }
00:11:09.065      }
00:11:09.065    }
00:11:09.065  ]'
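The `bdev_get_bdevs` dump above reports `lbd_vm_0` with `"num_blocks": 3747078144` and `"num_allocated_clusters": 0` — the volume was created thin-provisioned (`-t`), so it advertises the full requested size while occupying no clusters yet. A quick cross-check of the block count against the 1829628 MB create request (an annotation computed from the logged values, not part of the test script):

```shell
# Verify the reported lvol size: requested MiB -> 512-byte blocks.
size_mb=1829628
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"  # 3747078144, matching lbd_vm_0's num_blocks
```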
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@111 -- # notice 'Configuration after initial set-up:'
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Configuration after initial set-up:'
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:09.065   10:38:58 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Configuration after initial set-up:'
00:11:09.065  INFO: Configuration after initial set-up:
00:11:09.066   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@112 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_get_lvstores
00:11:09.066  [
00:11:09.066    {
00:11:09.066      "uuid": "48a3e668-55aa-4c49-846a-57de4b525423",
00:11:09.066      "name": "lvs_0",
00:11:09.066      "base_bdev": "Nvme0n1",
00:11:09.066      "total_data_clusters": 457407,
00:11:09.066      "free_clusters": 457407,
00:11:09.066      "block_size": 512,
00:11:09.066      "cluster_size": 4194304
00:11:09.066    }
00:11:09.066  ]
00:11:09.066   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@113 -- # echo '[
00:11:09.066    {
00:11:09.066      "name": "Nvme0n1",
00:11:09.066      "aliases": [
00:11:09.066        "36344730-5260-5497-0025-38450000011d"
00:11:09.066      ],
00:11:09.066      "product_name": "NVMe disk",
00:11:09.066      "block_size": 512,
00:11:09.066      "num_blocks": 3750748848,
00:11:09.066      "uuid": "36344730-5260-5497-0025-38450000011d",
00:11:09.066      "numa_id": 0,
00:11:09.066      "assigned_rate_limits": {
00:11:09.066        "rw_ios_per_sec": 0,
00:11:09.066        "rw_mbytes_per_sec": 0,
00:11:09.066        "r_mbytes_per_sec": 0,
00:11:09.066        "w_mbytes_per_sec": 0
00:11:09.066      },
00:11:09.066      "claimed": true,
00:11:09.066      "claim_type": "read_many_write_one",
00:11:09.066      "zoned": false,
00:11:09.066      "supported_io_types": {
00:11:09.066        "read": true,
00:11:09.066        "write": true,
00:11:09.066        "unmap": true,
00:11:09.066        "flush": true,
00:11:09.066        "reset": true,
00:11:09.066        "nvme_admin": true,
00:11:09.066        "nvme_io": true,
00:11:09.066        "nvme_io_md": false,
00:11:09.066        "write_zeroes": true,
00:11:09.066        "zcopy": false,
00:11:09.066        "get_zone_info": false,
00:11:09.066        "zone_management": false,
00:11:09.066        "zone_append": false,
00:11:09.066        "compare": true,
00:11:09.066        "compare_and_write": false,
00:11:09.066        "abort": true,
00:11:09.066        "seek_hole": false,
00:11:09.066        "seek_data": false,
00:11:09.066        "copy": false,
00:11:09.066        "nvme_iov_md": false
00:11:09.066      },
00:11:09.066      "driver_specific": {
00:11:09.066        "nvme": [
00:11:09.066          {
00:11:09.066            "pci_address": "0000:5e:00.0",
00:11:09.066            "trid": {
00:11:09.066              "trtype": "PCIe",
00:11:09.066              "traddr": "0000:5e:00.0"
00:11:09.066            },
00:11:09.066            "ctrlr_data": {
00:11:09.066              "cntlid": 6,
00:11:09.066              "vendor_id": "0x144d",
00:11:09.066              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:11:09.066              "serial_number": "S64GNE0R605497",
00:11:09.066              "firmware_revision": "GDC5302Q",
00:11:09.066              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:11:09.066              "oacs": {
00:11:09.066                "security": 1,
00:11:09.066                "format": 1,
00:11:09.066                "firmware": 1,
00:11:09.066                "ns_manage": 1
00:11:09.066              },
00:11:09.066              "multi_ctrlr": false,
00:11:09.066              "ana_reporting": false
00:11:09.066            },
00:11:09.066            "vs": {
00:11:09.066              "nvme_version": "1.4"
00:11:09.066            },
00:11:09.066            "ns_data": {
00:11:09.066              "id": 1,
00:11:09.066              "can_share": false
00:11:09.066            },
00:11:09.066            "security": {
00:11:09.066              "opal": true
00:11:09.066            }
00:11:09.066          }
00:11:09.066        ],
00:11:09.066        "mp_policy": "active_passive"
00:11:09.066      }
00:11:09.066    },
00:11:09.066    {
00:11:09.066      "name": "Nvme1n1",
00:11:09.066      "aliases": [
00:11:09.066        "bf9e8a9c-07a7-4245-83d1-2e91afd7063e"
00:11:09.066      ],
00:11:09.066      "product_name": "NVMe disk",
00:11:09.066      "block_size": 512,
00:11:09.066      "num_blocks": 732585168,
00:11:09.066      "uuid": "bf9e8a9c-07a7-4245-83d1-2e91afd7063e",
00:11:09.066      "numa_id": 1,
00:11:09.066      "assigned_rate_limits": {
00:11:09.066        "rw_ios_per_sec": 0,
00:11:09.066        "rw_mbytes_per_sec": 0,
00:11:09.066        "r_mbytes_per_sec": 0,
00:11:09.066        "w_mbytes_per_sec": 0
00:11:09.066      },
00:11:09.066      "claimed": false,
00:11:09.066      "zoned": false,
00:11:09.066      "supported_io_types": {
00:11:09.066        "read": true,
00:11:09.066        "write": true,
00:11:09.066        "unmap": true,
00:11:09.066        "flush": true,
00:11:09.066        "reset": true,
00:11:09.066        "nvme_admin": true,
00:11:09.066        "nvme_io": true,
00:11:09.066        "nvme_io_md": false,
00:11:09.066        "write_zeroes": true,
00:11:09.066        "zcopy": false,
00:11:09.066        "get_zone_info": false,
00:11:09.066        "zone_management": false,
00:11:09.066        "zone_append": false,
00:11:09.066        "compare": false,
00:11:09.066        "compare_and_write": false,
00:11:09.066        "abort": true,
00:11:09.066        "seek_hole": false,
00:11:09.066        "seek_data": false,
00:11:09.066        "copy": false,
00:11:09.066        "nvme_iov_md": false
00:11:09.066      },
00:11:09.066      "driver_specific": {
00:11:09.066        "nvme": [
00:11:09.066          {
00:11:09.066            "pci_address": "0000:af:00.0",
00:11:09.066            "trid": {
00:11:09.066              "trtype": "PCIe",
00:11:09.066              "traddr": "0000:af:00.0"
00:11:09.066            },
00:11:09.066            "ctrlr_data": {
00:11:09.066              "cntlid": 0,
00:11:09.066              "vendor_id": "0x8086",
00:11:09.066              "model_number": "INTEL SSDPED1K375GA",
00:11:09.066              "serial_number": "PHKS7481000F375AGN",
00:11:09.066              "firmware_revision": "E2010600",
00:11:09.066              "oacs": {
00:11:09.066                "security": 1,
00:11:09.066                "format": 1,
00:11:09.066                "firmware": 1,
00:11:09.066                "ns_manage": 0
00:11:09.066              },
00:11:09.066              "multi_ctrlr": false,
00:11:09.066              "ana_reporting": false
00:11:09.066            },
00:11:09.066            "vs": {
00:11:09.066              "nvme_version": "1.0"
00:11:09.066            },
00:11:09.066            "ns_data": {
00:11:09.066              "id": 1,
00:11:09.066              "can_share": false
00:11:09.066            },
00:11:09.066            "security": {
00:11:09.066              "opal": true
00:11:09.066            }
00:11:09.066          }
00:11:09.066        ],
00:11:09.066        "mp_policy": "active_passive"
00:11:09.066      }
00:11:09.066    },
00:11:09.066    {
00:11:09.066      "name": "Nvme2n1",
00:11:09.066      "aliases": [
00:11:09.066        "e3e6f570-f67d-4e01-9319-3a62f8a2d812"
00:11:09.066      ],
00:11:09.066      "product_name": "NVMe disk",
00:11:09.066      "block_size": 512,
00:11:09.066      "num_blocks": 732585168,
00:11:09.066      "uuid": "e3e6f570-f67d-4e01-9319-3a62f8a2d812",
00:11:09.066      "numa_id": 1,
00:11:09.066      "assigned_rate_limits": {
00:11:09.066        "rw_ios_per_sec": 0,
00:11:09.066        "rw_mbytes_per_sec": 0,
00:11:09.066        "r_mbytes_per_sec": 0,
00:11:09.066        "w_mbytes_per_sec": 0
00:11:09.066      },
00:11:09.066      "claimed": false,
00:11:09.066      "zoned": false,
00:11:09.066      "supported_io_types": {
00:11:09.066        "read": true,
00:11:09.066        "write": true,
00:11:09.066        "unmap": true,
00:11:09.066        "flush": true,
00:11:09.066        "reset": true,
00:11:09.066        "nvme_admin": true,
00:11:09.066        "nvme_io": true,
00:11:09.066        "nvme_io_md": false,
00:11:09.066        "write_zeroes": true,
00:11:09.066        "zcopy": false,
00:11:09.066        "get_zone_info": false,
00:11:09.066        "zone_management": false,
00:11:09.066        "zone_append": false,
00:11:09.066        "compare": false,
00:11:09.066        "compare_and_write": false,
00:11:09.066        "abort": true,
00:11:09.066        "seek_hole": false,
00:11:09.066        "seek_data": false,
00:11:09.066        "copy": false,
00:11:09.066        "nvme_iov_md": false
00:11:09.066      },
00:11:09.066      "driver_specific": {
00:11:09.066        "nvme": [
00:11:09.066          {
00:11:09.066            "pci_address": "0000:b0:00.0",
00:11:09.066            "trid": {
00:11:09.066              "trtype": "PCIe",
00:11:09.066              "traddr": "0000:b0:00.0"
00:11:09.066            },
00:11:09.066            "ctrlr_data": {
00:11:09.066              "cntlid": 0,
00:11:09.066              "vendor_id": "0x8086",
00:11:09.066              "model_number": "INTEL SSDPED1K375GA",
00:11:09.066              "serial_number": "PHKS7482004A375AGN",
00:11:09.066              "firmware_revision": "E2010600",
00:11:09.066              "oacs": {
00:11:09.066                "security": 1,
00:11:09.066                "format": 1,
00:11:09.066                "firmware": 1,
00:11:09.066                "ns_manage": 0
00:11:09.066              },
00:11:09.066              "multi_ctrlr": false,
00:11:09.066              "ana_reporting": false
00:11:09.066            },
00:11:09.066            "vs": {
00:11:09.066              "nvme_version": "1.0"
00:11:09.066            },
00:11:09.066            "ns_data": {
00:11:09.066              "id": 1,
00:11:09.066              "can_share": false
00:11:09.067            },
00:11:09.067            "security": {
00:11:09.067              "opal": true
00:11:09.067            }
00:11:09.067          }
00:11:09.067        ],
00:11:09.067        "mp_policy": "active_passive"
00:11:09.067      }
00:11:09.067    },
00:11:09.067    {
00:11:09.067      "name": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.067      "aliases": [
00:11:09.067        "lvs_0/lbd_vm_0"
00:11:09.067      ],
00:11:09.067      "product_name": "Logical Volume",
00:11:09.067      "block_size": 512,
00:11:09.067      "num_blocks": 3747078144,
00:11:09.067      "uuid": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.067      "assigned_rate_limits": {
00:11:09.067        "rw_ios_per_sec": 0,
00:11:09.067        "rw_mbytes_per_sec": 0,
00:11:09.067        "r_mbytes_per_sec": 0,
00:11:09.067        "w_mbytes_per_sec": 0
00:11:09.067      },
00:11:09.067      "claimed": false,
00:11:09.067      "zoned": false,
00:11:09.067      "supported_io_types": {
00:11:09.067        "read": true,
00:11:09.067        "write": true,
00:11:09.067        "unmap": true,
00:11:09.067        "flush": false,
00:11:09.067        "reset": true,
00:11:09.067        "nvme_admin": false,
00:11:09.067        "nvme_io": false,
00:11:09.067        "nvme_io_md": false,
00:11:09.067        "write_zeroes": true,
00:11:09.067        "zcopy": false,
00:11:09.067        "get_zone_info": false,
00:11:09.067        "zone_management": false,
00:11:09.067        "zone_append": false,
00:11:09.067        "compare": false,
00:11:09.067        "compare_and_write": false,
00:11:09.067        "abort": false,
00:11:09.067        "seek_hole": true,
00:11:09.067        "seek_data": true,
00:11:09.067        "copy": false,
00:11:09.067        "nvme_iov_md": false
00:11:09.067      },
00:11:09.067      "driver_specific": {
00:11:09.067        "lvol": {
00:11:09.067          "lvol_store_uuid": "48a3e668-55aa-4c49-846a-57de4b525423",
00:11:09.067          "base_bdev": "Nvme0n1",
00:11:09.067          "thin_provision": true,
00:11:09.067          "num_allocated_clusters": 0,
00:11:09.067          "snapshot": false,
00:11:09.067          "clone": false,
00:11:09.067          "esnap_clone": false
00:11:09.067        }
00:11:09.067      }
00:11:09.067    }
00:11:09.067  ]'
00:11:09.067  [
00:11:09.067    {
00:11:09.067      "name": "Nvme0n1",
00:11:09.067      "aliases": [
00:11:09.067        "36344730-5260-5497-0025-38450000011d"
00:11:09.067      ],
00:11:09.067      "product_name": "NVMe disk",
00:11:09.067      "block_size": 512,
00:11:09.067      "num_blocks": 3750748848,
00:11:09.067      "uuid": "36344730-5260-5497-0025-38450000011d",
00:11:09.067      "numa_id": 0,
00:11:09.067      "assigned_rate_limits": {
00:11:09.067        "rw_ios_per_sec": 0,
00:11:09.067        "rw_mbytes_per_sec": 0,
00:11:09.067        "r_mbytes_per_sec": 0,
00:11:09.067        "w_mbytes_per_sec": 0
00:11:09.067      },
00:11:09.067      "claimed": true,
00:11:09.067      "claim_type": "read_many_write_one",
00:11:09.067      "zoned": false,
00:11:09.067      "supported_io_types": {
00:11:09.067        "read": true,
00:11:09.067        "write": true,
00:11:09.067        "unmap": true,
00:11:09.067        "flush": true,
00:11:09.067        "reset": true,
00:11:09.067        "nvme_admin": true,
00:11:09.067        "nvme_io": true,
00:11:09.067        "nvme_io_md": false,
00:11:09.067        "write_zeroes": true,
00:11:09.067        "zcopy": false,
00:11:09.067        "get_zone_info": false,
00:11:09.067        "zone_management": false,
00:11:09.067        "zone_append": false,
00:11:09.067        "compare": true,
00:11:09.067        "compare_and_write": false,
00:11:09.067        "abort": true,
00:11:09.067        "seek_hole": false,
00:11:09.067        "seek_data": false,
00:11:09.067        "copy": false,
00:11:09.067        "nvme_iov_md": false
00:11:09.067      },
00:11:09.067      "driver_specific": {
00:11:09.067        "nvme": [
00:11:09.067          {
00:11:09.067            "pci_address": "0000:5e:00.0",
00:11:09.067            "trid": {
00:11:09.067              "trtype": "PCIe",
00:11:09.067              "traddr": "0000:5e:00.0"
00:11:09.067            },
00:11:09.067            "ctrlr_data": {
00:11:09.067              "cntlid": 6,
00:11:09.067              "vendor_id": "0x144d",
00:11:09.067              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:11:09.067              "serial_number": "S64GNE0R605497",
00:11:09.067              "firmware_revision": "GDC5302Q",
00:11:09.067              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:11:09.067              "oacs": {
00:11:09.067                "security": 1,
00:11:09.067                "format": 1,
00:11:09.067                "firmware": 1,
00:11:09.067                "ns_manage": 1
00:11:09.067              },
00:11:09.067              "multi_ctrlr": false,
00:11:09.067              "ana_reporting": false
00:11:09.067            },
00:11:09.067            "vs": {
00:11:09.067              "nvme_version": "1.4"
00:11:09.067            },
00:11:09.067            "ns_data": {
00:11:09.067              "id": 1,
00:11:09.067              "can_share": false
00:11:09.067            },
00:11:09.067            "security": {
00:11:09.067              "opal": true
00:11:09.067            }
00:11:09.067          }
00:11:09.067        ],
00:11:09.067        "mp_policy": "active_passive"
00:11:09.067      }
00:11:09.067    },
00:11:09.067    {
00:11:09.067      "name": "Nvme1n1",
00:11:09.067      "aliases": [
00:11:09.067        "bf9e8a9c-07a7-4245-83d1-2e91afd7063e"
00:11:09.067      ],
00:11:09.067      "product_name": "NVMe disk",
00:11:09.067      "block_size": 512,
00:11:09.067      "num_blocks": 732585168,
00:11:09.067      "uuid": "bf9e8a9c-07a7-4245-83d1-2e91afd7063e",
00:11:09.067      "numa_id": 1,
00:11:09.067      "assigned_rate_limits": {
00:11:09.067        "rw_ios_per_sec": 0,
00:11:09.067        "rw_mbytes_per_sec": 0,
00:11:09.067        "r_mbytes_per_sec": 0,
00:11:09.067        "w_mbytes_per_sec": 0
00:11:09.067      },
00:11:09.067      "claimed": false,
00:11:09.067      "zoned": false,
00:11:09.067      "supported_io_types": {
00:11:09.067        "read": true,
00:11:09.067        "write": true,
00:11:09.067        "unmap": true,
00:11:09.067        "flush": true,
00:11:09.067        "reset": true,
00:11:09.067        "nvme_admin": true,
00:11:09.067        "nvme_io": true,
00:11:09.067        "nvme_io_md": false,
00:11:09.067        "write_zeroes": true,
00:11:09.067        "zcopy": false,
00:11:09.067        "get_zone_info": false,
00:11:09.067        "zone_management": false,
00:11:09.067        "zone_append": false,
00:11:09.067        "compare": false,
00:11:09.067        "compare_and_write": false,
00:11:09.067        "abort": true,
00:11:09.067        "seek_hole": false,
00:11:09.067        "seek_data": false,
00:11:09.067        "copy": false,
00:11:09.067        "nvme_iov_md": false
00:11:09.067      },
00:11:09.067      "driver_specific": {
00:11:09.067        "nvme": [
00:11:09.067          {
00:11:09.067            "pci_address": "0000:af:00.0",
00:11:09.067            "trid": {
00:11:09.067              "trtype": "PCIe",
00:11:09.067              "traddr": "0000:af:00.0"
00:11:09.067            },
00:11:09.067            "ctrlr_data": {
00:11:09.067              "cntlid": 0,
00:11:09.067              "vendor_id": "0x8086",
00:11:09.067              "model_number": "INTEL SSDPED1K375GA",
00:11:09.067              "serial_number": "PHKS7481000F375AGN",
00:11:09.067              "firmware_revision": "E2010600",
00:11:09.067              "oacs": {
00:11:09.067                "security": 1,
00:11:09.067                "format": 1,
00:11:09.067                "firmware": 1,
00:11:09.067                "ns_manage": 0
00:11:09.067              },
00:11:09.067              "multi_ctrlr": false,
00:11:09.067              "ana_reporting": false
00:11:09.067            },
00:11:09.067            "vs": {
00:11:09.067              "nvme_version": "1.0"
00:11:09.067            },
00:11:09.067            "ns_data": {
00:11:09.067              "id": 1,
00:11:09.067              "can_share": false
00:11:09.067            },
00:11:09.067            "security": {
00:11:09.067              "opal": true
00:11:09.067            }
00:11:09.067          }
00:11:09.067        ],
00:11:09.067        "mp_policy": "active_passive"
00:11:09.067      }
00:11:09.067    },
00:11:09.067    {
00:11:09.067      "name": "Nvme2n1",
00:11:09.067      "aliases": [
00:11:09.067        "e3e6f570-f67d-4e01-9319-3a62f8a2d812"
00:11:09.067      ],
00:11:09.067      "product_name": "NVMe disk",
00:11:09.067      "block_size": 512,
00:11:09.067      "num_blocks": 732585168,
00:11:09.067      "uuid": "e3e6f570-f67d-4e01-9319-3a62f8a2d812",
00:11:09.067      "numa_id": 1,
00:11:09.067      "assigned_rate_limits": {
00:11:09.067        "rw_ios_per_sec": 0,
00:11:09.067        "rw_mbytes_per_sec": 0,
00:11:09.067        "r_mbytes_per_sec": 0,
00:11:09.067        "w_mbytes_per_sec": 0
00:11:09.067      },
00:11:09.067      "claimed": false,
00:11:09.067      "zoned": false,
00:11:09.067      "supported_io_types": {
00:11:09.067        "read": true,
00:11:09.067        "write": true,
00:11:09.067        "unmap": true,
00:11:09.067        "flush": true,
00:11:09.067        "reset": true,
00:11:09.067        "nvme_admin": true,
00:11:09.067        "nvme_io": true,
00:11:09.067        "nvme_io_md": false,
00:11:09.067        "write_zeroes": true,
00:11:09.067        "zcopy": false,
00:11:09.068        "get_zone_info": false,
00:11:09.068        "zone_management": false,
00:11:09.068        "zone_append": false,
00:11:09.068        "compare": false,
00:11:09.068        "compare_and_write": false,
00:11:09.068        "abort": true,
00:11:09.068        "seek_hole": false,
00:11:09.068        "seek_data": false,
00:11:09.068        "copy": false,
00:11:09.068        "nvme_iov_md": false
00:11:09.068      },
00:11:09.068      "driver_specific": {
00:11:09.068        "nvme": [
00:11:09.068          {
00:11:09.068            "pci_address": "0000:b0:00.0",
00:11:09.068            "trid": {
00:11:09.068              "trtype": "PCIe",
00:11:09.068              "traddr": "0000:b0:00.0"
00:11:09.068            },
00:11:09.068            "ctrlr_data": {
00:11:09.068              "cntlid": 0,
00:11:09.068              "vendor_id": "0x8086",
00:11:09.068              "model_number": "INTEL SSDPED1K375GA",
00:11:09.068              "serial_number": "PHKS7482004A375AGN",
00:11:09.068              "firmware_revision": "E2010600",
00:11:09.068              "oacs": {
00:11:09.068                "security": 1,
00:11:09.068                "format": 1,
00:11:09.068                "firmware": 1,
00:11:09.068                "ns_manage": 0
00:11:09.068              },
00:11:09.068              "multi_ctrlr": false,
00:11:09.068              "ana_reporting": false
00:11:09.068            },
00:11:09.068            "vs": {
00:11:09.068              "nvme_version": "1.0"
00:11:09.068            },
00:11:09.068            "ns_data": {
00:11:09.068              "id": 1,
00:11:09.068              "can_share": false
00:11:09.068            },
00:11:09.068            "security": {
00:11:09.068              "opal": true
00:11:09.068            }
00:11:09.068          }
00:11:09.068        ],
00:11:09.068        "mp_policy": "active_passive"
00:11:09.068      }
00:11:09.068    },
00:11:09.068    {
00:11:09.068      "name": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.068      "aliases": [
00:11:09.068        "lvs_0/lbd_vm_0"
00:11:09.068      ],
00:11:09.068      "product_name": "Logical Volume",
00:11:09.068      "block_size": 512,
00:11:09.068      "num_blocks": 3747078144,
00:11:09.068      "uuid": "6e03160c-5372-407f-a809-79382c3da070",
00:11:09.068      "assigned_rate_limits": {
00:11:09.068        "rw_ios_per_sec": 0,
00:11:09.068        "rw_mbytes_per_sec": 0,
00:11:09.068        "r_mbytes_per_sec": 0,
00:11:09.068        "w_mbytes_per_sec": 0
00:11:09.068      },
00:11:09.068      "claimed": false,
00:11:09.068      "zoned": false,
00:11:09.068      "supported_io_types": {
00:11:09.068        "read": true,
00:11:09.068        "write": true,
00:11:09.068        "unmap": true,
00:11:09.068        "flush": false,
00:11:09.068        "reset": true,
00:11:09.068        "nvme_admin": false,
00:11:09.068        "nvme_io": false,
00:11:09.068        "nvme_io_md": false,
00:11:09.068        "write_zeroes": true,
00:11:09.068        "zcopy": false,
00:11:09.068        "get_zone_info": false,
00:11:09.068        "zone_management": false,
00:11:09.068        "zone_append": false,
00:11:09.068        "compare": false,
00:11:09.068        "compare_and_write": false,
00:11:09.068        "abort": false,
00:11:09.068        "seek_hole": true,
00:11:09.068        "seek_data": true,
00:11:09.068        "copy": false,
00:11:09.068        "nvme_iov_md": false
00:11:09.068      },
00:11:09.068      "driver_specific": {
00:11:09.068        "lvol": {
00:11:09.068          "lvol_store_uuid": "48a3e668-55aa-4c49-846a-57de4b525423",
00:11:09.068          "base_bdev": "Nvme0n1",
00:11:09.068          "thin_provision": true,
00:11:09.068          "num_allocated_clusters": 0,
00:11:09.068          "snapshot": false,
00:11:09.068          "clone": false,
00:11:09.068          "esnap_clone": false
00:11:09.068        }
00:11:09.068      }
00:11:09.068    }
00:11:09.068  ]
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i = 0 ))
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i < vm_count ))
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@117 -- # vm=vm_0
00:11:09.068    10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@121 -- # jq -r 'map(select(.aliases[] | contains("vm_0")) |             .aliases[]) | join(" ")'
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@121 -- # bdevs=lvs_0/lbd_vm_0
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@122 -- # bdevs=($bdevs)
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@124 -- # setup_cmd='vm_setup --disk-type=spdk_vhost_scsi --force=0'
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@125 -- # setup_cmd+=' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2'
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@128 -- # mask_arg=("--cpumask" "$spdk_mask")
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@130 -- # [[ spdk_vhost_scsi == \s\p\d\k\_\v\h\o\s\t\_\s\c\s\i ]]
00:11:09.068   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@131 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_scsi_controller naa.0.0 --cpumask '[2,3,4,5]'
00:11:09.327  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vhost-user server: socket created, fd: 343
00:11:09.327  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) binding succeeded
00:11:09.327   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@132 -- # (( j = 0 ))
00:11:09.327   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@132 -- # (( j < 1 ))
00:11:09.327   10:38:58 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@133 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_add_target naa.0.0 0 lvs_0/lbd_vm_0
00:11:09.593  0
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@132 -- # (( j++ ))
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@132 -- # (( j < 1 ))
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@135 -- # setup_cmd+=' --disks=0'
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@146 -- # vm_setup --disk-type=spdk_vhost_scsi --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=0
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@518 -- # xtrace_disable
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:09.593  INFO: Creating new VM in /root/vhost_test/vms/0
00:11:09.593  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:09.593  INFO: TASK MASK: 0,1
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@671 -- # local node_num=0
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:09.593   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:09.593  INFO: NUMA NODE: 0
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@701 -- # IFS=,
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@702 -- # disk_type=spdk_vhost_scsi
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@704 -- # case $disk_type in
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@723 -- # notice 'using socket /root/vhost_test/vhost/0/naa.0.0'
00:11:09.594   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vhost/0/naa.0.0'
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vhost/0/naa.0.0'
00:11:09.595  INFO: using socket /root/vhost_test/vhost/0/naa.0.0
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@724 -- # cmd+=(-chardev "socket,id=char_$disk,path=$vhost_dir/naa.$disk.$vm_num")
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@725 -- # cmd+=(-device "vhost-user-scsi-pci,id=scsi_$disk,num_queues=$queue_number,chardev=char_$disk")
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@726 -- # [[ 0 == '' ]]
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@785 -- # (( 0 ))
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:11:09.595  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:11:09.595   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@787 -- # cat
00:11:09.596    10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 0,1 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -chardev socket,id=char_0,path=/root/vhost_test/vhost/0/naa.0.0 -device vhost-user-scsi-pci,id=scsi_0,num_queues=2,chardev=char_0
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@827 -- # echo 10000
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@828 -- # echo 10001
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@829 -- # echo 10002
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@834 -- # echo 10004
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@835 -- # echo 100
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@147 -- # used_vms+=' 0'
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i++ ))
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i < vm_count ))
00:11:09.596   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@150 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_get_controllers
00:11:09.865  [
00:11:09.865    {
00:11:09.865      "ctrlr": "naa.0.0",
00:11:09.865      "cpumask": "0x3c",
00:11:09.865      "delay_base_us": 0,
00:11:09.865      "iops_threshold": 60000,
00:11:09.865      "socket": "/root/vhost_test/vhost/0/naa.0.0",
00:11:09.865      "sessions": [],
00:11:09.865      "backend_specific": {
00:11:09.865        "scsi": [
00:11:09.865          {
00:11:09.865            "scsi_dev_num": 0,
00:11:09.865            "id": 0,
00:11:09.865            "target_name": "Target 0",
00:11:09.865            "luns": [
00:11:09.865              {
00:11:09.865                "id": 0,
00:11:09.865                "bdev_name": "6e03160c-5372-407f-a809-79382c3da070"
00:11:09.865              }
00:11:09.865            ]
00:11:09.865          }
00:11:09.865        ]
00:11:09.865      }
00:11:09.865    }
00:11:09.865  ]
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@153 -- # vm_run 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@843 -- # local run_all=false
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@844 -- # local vms_to_run=
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@846 -- # getopts a-: optchar
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@856 -- # false
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@859 -- # shift 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@860 -- # for vm in "$@"
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@871 -- # vm_is_running 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:09.865   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@373 -- # return 1
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:11:09.866  INFO: running /root/vhost_test/vms/0/run.sh
00:11:09.866   10:38:59 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:11:09.866  Running VM in /root/vhost_test/vms/0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new vhost user connection is 76
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device, handle is 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_FEATURES
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Vhost-user protocol features: 0x11cbf
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_QUEUE_NUM
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_OWNER
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_FEATURES
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:347
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:348
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:2 file:349
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:3 file:350
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:11:10.434  Waiting for QEMU pid file
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_INFLIGHT_FD
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd num_queues: 4
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd queue_size: 128
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_size: 8448
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_offset: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight fd: 351
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_INFLIGHT_FD
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_size: 8448
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_offset: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd num_queues: 4
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd queue_size: 128
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd fd: 352
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd pervq_inflight_size: 2112
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x140000000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x00000008):
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 1
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_MEM_TABLE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) guest memory region size: 0x40000000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 guest physical addr: 0x0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 guest virtual  addr: 0x7efed3e00000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 host  virtual  addr: 0x7f3965200000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap addr : 0x7f3965200000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap size : 0x40000000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap align: 0x200000
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap off  : 0x0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:2 last_used_idx:0 last_avail_idx:0.
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:2 file:353
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 0
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 1
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 2
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:10.434  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 3
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:355
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:347
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:2 file:348
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:10.435  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:3 file:349
00:11:11.373  === qemu.log ===
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@154 -- # vm_wait_for_boot 300 0
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@913 -- # assert_number 300
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@281 -- # return 0
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@915 -- # xtrace_disable
00:11:11.373   10:39:00 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:11.373  INFO: Waiting for VMs to boot
00:11:11.373  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 0
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 1
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 2
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 3
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:11:21.355  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:2 file:260
00:11:21.924  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_INFLIGHT_FD
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd num_queues: 4
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd queue_size: 128
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_size: 8448
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_offset: 0
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight fd: 348
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_INFLIGHT_FD
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_size: 8448
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_offset: 0
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd num_queues: 4
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd queue_size: 128
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd fd: 350
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd pervq_inflight_size: 2112
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x150000006
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_MEM_TABLE
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) memory regions not changed
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:0 file:348
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:1 file:352
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:2 last_used_idx:0 last_avail_idx:0.
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:2 file:353
00:11:21.925  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:3 last_used_idx:0 last_avail_idx:0.
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:3 file:356
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 0
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 1
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 2
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 3
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:357
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:355
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:2 file:347
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:11:22.184  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:3 file:358
00:11:22.184  [2024-11-19 10:39:11.735586] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:32.228  
00:11:32.228  INFO: VM0 ready
00:11:32.229  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:32.229  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:32.229  INFO: all VMs ready
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@973 -- # return 0
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@158 -- # fio_disks=
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@159 -- # for vm_num in $used_vms
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@160 -- # qemu_mask_param=VM_0_qemu_mask
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@162 -- # host_name=VM-0-0,1
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@164 -- # host_name=VM-0-0-1
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@165 -- # vm_exec 0 'hostname VM-0-0-1'
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:32.229    10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:32.229   10:39:21 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-0-1'
00:11:32.229  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@166 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@978 -- # local readonly=
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@979 -- # local fio_bin=
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@981 -- # case "$optchar" in
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@993 -- # shift 1
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:11:32.488  INFO: Starting fio server on VM0
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:32.488    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:32.488   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:32.488  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:32.747    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:32.747   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:32.747  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:33.006   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@168 -- # [[ spdk_vhost_scsi == \s\p\d\k\_\v\h\o\s\t\_\s\c\s\i ]]
00:11:33.006   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@169 -- # vm_check_scsi_location 0
00:11:33.006   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:11:33.006  	for entry in /sys/block/sd*; do
00:11:33.006  		disk_type="$(cat $entry/device/vendor)";
00:11:33.006  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:33.006  			fname=$(basename $entry);
00:11:33.006  			echo -n " $fname";
00:11:33.006  		fi;
00:11:33.006  	done'
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:11:33.006  	for entry in /sys/block/sd*; do
00:11:33.006  		disk_type="$(cat $entry/device/vendor)";
00:11:33.006  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:33.006  			fname=$(basename $entry);
00:11:33.006  			echo -n " $fname";
00:11:33.006  		fi;
00:11:33.006  	done'
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1016 -- # vm_exec 0 bash -s
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:33.006     10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:33.006    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 bash -s
00:11:33.006  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@174 -- # printf :/dev/%s sdb
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@174 -- # fio_disks+=' --vm=0:/dev/sdb'
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@177 -- # [[ 0 -eq 1 ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@180 -- # job_file=default_integrity.job
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@183 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/sdb
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1053 -- # local arg
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1054 -- # local job_file=
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1055 -- # local fio_bin=
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1056 -- # vms=()
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1056 -- # local vms
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1057 -- # local out=
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1058 -- # local vm
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1059 -- # local run_server_mode=true
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1061 -- # local fio_start_cmd
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1067 -- # case "$arg" in
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1067 -- # case "$arg" in
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1067 -- # case "$arg" in
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1067 -- # case "$arg" in
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1108 -- # local job_fname
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1115 -- # local vm_num=0
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:33.265    10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:33.265   10:39:22 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:11:33.265  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1121 -- # false
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:33.265   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:33.265    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:33.265    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:33.265    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.266    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.266    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:33.266    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:33.266   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:11:33.525  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:33.525  [global]
00:11:33.525  blocksize_range=4k-512k
00:11:33.525  iodepth=512
00:11:33.525  iodepth_batch=128
00:11:33.525  iodepth_low=256
00:11:33.525  ioengine=libaio
00:11:33.525  size=1G
00:11:33.525  io_size=4G
00:11:33.525  filename=/dev/sdb
00:11:33.525  group_reporting
00:11:33.525  thread
00:11:33.525  numjobs=1
00:11:33.525  direct=1
00:11:33.525  rw=randwrite
00:11:33.525  do_verify=1
00:11:33.525  verify=md5
00:11:33.525  verify_backlog=1024
00:11:33.525  fsync_on_close=1
00:11:33.525  verify_state_save=0
00:11:33.525  [nvme-host]
00:11:33.525   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1127 -- # true
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:11:33.525    10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:11:33.525   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:11:33.525   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1131 -- # true
00:11:33.525   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1147 -- # true
00:11:33.525   10:39:23 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job
00:11:34.903  [2024-11-19 10:39:24.287221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:39.123  [2024-11-19 10:39:28.137632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:39.123  [2024-11-19 10:39:28.146834] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:39.123  [2024-11-19 10:39:28.372316] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:42.412  [2024-11-19 10:39:31.948230] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:42.412  [2024-11-19 10:39:31.962258] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:42.412  [2024-11-19 10:39:32.173771] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:42.672   10:39:32 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1162 -- # sleep 1
00:11:43.608   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:11:43.608   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:11:43.608   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:11:43.608  hostname=VM-0-0-1, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:11:43.608  <VM-0-0-1> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:11:43.608  <VM-0-0-1> Starting 1 thread
00:11:43.608  <VM-0-0-1> 
00:11:43.608  nvme-host: (groupid=0, jobs=1): err= 0: pid=954: Tue Nov 19 10:39:32 2024
00:11:43.608    read: IOPS=1603, BW=269MiB/s (282MB/s)(2048MiB/7614msec)
00:11:43.608      slat (usec): min=66, max=12088, avg=1815.26, stdev=2286.70
00:11:43.608      clat (msec): min=2, max=282, avg=111.98, stdev=60.02
00:11:43.608       lat (msec): min=4, max=283, avg=113.80, stdev=59.96
00:11:43.608      clat percentiles (msec):
00:11:43.608       |  1.00th=[    9],  5.00th=[   17], 10.00th=[   35], 20.00th=[   63],
00:11:43.608       | 30.00th=[   75], 40.00th=[   92], 50.00th=[  107], 60.00th=[  123],
00:11:43.608       | 70.00th=[  142], 80.00th=[  165], 90.00th=[  197], 95.00th=[  220],
00:11:43.608       | 99.00th=[  262], 99.50th=[  271], 99.90th=[  279], 99.95th=[  284],
00:11:43.608       | 99.99th=[  284]
00:11:43.608    write: IOPS=1707, BW=287MiB/s (300MB/s)(2048MiB/7148msec); 0 zone resets
00:11:43.608      slat (usec): min=294, max=65792, avg=18048.21, stdev=12239.68
00:11:43.608      clat (msec): min=3, max=248, avg=97.39, stdev=56.64
00:11:43.608       lat (msec): min=3, max=274, avg=115.44, stdev=59.22
00:11:43.608      clat percentiles (msec):
00:11:43.608       |  1.00th=[    4],  5.00th=[    8], 10.00th=[   19], 20.00th=[   51],
00:11:43.608       | 30.00th=[   66], 40.00th=[   78], 50.00th=[   95], 60.00th=[  109],
00:11:43.608       | 70.00th=[  125], 80.00th=[  144], 90.00th=[  174], 95.00th=[  199],
00:11:43.608       | 99.00th=[  232], 99.50th=[  249], 99.90th=[  249], 99.95th=[  249],
00:11:43.608       | 99.99th=[  249]
00:11:43.608     bw (  KiB/s): min=169141, max=472048, per=100.00%, avg=299568.93, stdev=103810.60, samples=14
00:11:43.608     iops        : min= 1021, max= 2048, avg=1743.79, stdev=436.35, samples=14
00:11:43.608    lat (msec)   : 4=0.62%, 10=4.60%, 20=3.06%, 50=9.30%, 100=32.30%
00:11:43.608    lat (msec)   : 250=49.16%, 500=0.97%
00:11:43.608    cpu          : usr=94.58%, sys=2.13%, ctx=1868, majf=0, minf=35
00:11:43.608    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:11:43.608       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:11:43.608       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:43.608       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:43.608       latency   : target=0, window=0, percentile=100.00%, depth=512
00:11:43.608  
00:11:43.608  Run status group 0 (all jobs):
00:11:43.608     READ: bw=269MiB/s (282MB/s), 269MiB/s-269MiB/s (282MB/s-282MB/s), io=2048MiB (2147MB), run=7614-7614msec
00:11:43.608    WRITE: bw=287MiB/s (300MB/s), 287MiB/s-287MiB/s (300MB/s-300MB/s), io=2048MiB (2147MB), run=7148-7148msec
00:11:43.608  
00:11:43.608  Disk stats (read/write):
00:11:43.608    sdb: ios=11538/12122, merge=648/86, ticks=99221/49614, in_queue=148836, util=20.81%
00:11:43.608   10:39:33 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@185 -- # notice 'Shutting down virtual machines...'
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machines...'
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machines...'
00:11:43.609  INFO: Shutting down virtual machines...
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@186 -- # vm_shutdown_all
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@489 -- # vm_list_all
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@466 -- # vms=()
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@466 -- # local vms
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@492 -- # vm_shutdown 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@424 -- # vm_is_running 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@376 -- # local vm_pid
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # vm_pid=1874389
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@379 -- # /bin/kill -0 1874389
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@380 -- # return 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:11:43.609  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@432 -- # set +e
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@338 -- # local vm_num=0
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@339 -- # shift
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:11:43.609    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:11:43.609   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:43.609  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:11:43.868  INFO: VM0 is shutting down - wait a while to complete
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@435 -- # set -e
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:11:43.868  INFO: Waiting for VMs to shutdown...
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@498 -- # vm_is_running 0
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@376 -- # local vm_pid
00:11:43.868    10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # vm_pid=1874389
00:11:43.868   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@379 -- # /bin/kill -0 1874389
00:11:43.869   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@380 -- # return 0
00:11:43.869   10:39:33 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@500 -- # sleep 1
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@498 -- # vm_is_running 0
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@376 -- # local vm_pid
00:11:44.805    10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@377 -- # vm_pid=1874389
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@379 -- # /bin/kill -0 1874389
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@380 -- # return 0
00:11:44.805   10:39:34 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@500 -- # sleep 1
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 0
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 1
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 2
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 3
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 file:0
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:1 file:0
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:2 file:6714
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:3 file:17643
00:11:44.806  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vhost peer closed
00:11:45.742   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:45.742   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:45.742   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@498 -- # vm_is_running 0
00:11:45.742   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@373 -- # return 1
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:11:45.743   10:39:35 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@500 -- # sleep 1
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:11:47.120  INFO: All VMs successfully shut down
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@505 -- # return 0
00:11:47.120   10:39:36 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@187 -- # sleep 2
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@189 -- # notice 'Cleaning up vhost - remove LUNs, controllers, lvol bdevs and lvol stores'
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Cleaning up vhost - remove LUNs, controllers, lvol bdevs and lvol stores'
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Cleaning up vhost - remove LUNs, controllers, lvol bdevs and lvol stores'
00:11:49.023  INFO: Cleaning up vhost - remove LUNs, controllers, lvol bdevs and lvol stores
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@190 -- # [[ spdk_vhost_scsi == \s\p\d\k\_\v\h\o\s\t\_\s\c\s\i ]]
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( i = 0 ))
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( i < vm_count ))
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@192 -- # notice 'Removing devices from vhost SCSI controller naa.0.0'
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing devices from vhost SCSI controller naa.0.0'
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing devices from vhost SCSI controller naa.0.0'
00:11:49.023  INFO: Removing devices from vhost SCSI controller naa.0.0
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( j = 0 ))
00:11:49.023   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( j < 1 ))
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@194 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_scsi_controller_remove_target naa.0.0 0
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@195 -- # notice 'Removed device 0'
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removed device 0'
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removed device 0'
00:11:49.024  INFO: Removed device 0
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( j++ ))
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( j < 1 ))
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@197 -- # notice 'Removing vhost SCSI controller naa.0.0'
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing vhost SCSI controller naa.0.0'
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing vhost SCSI controller naa.0.0'
00:11:49.024  INFO: Removing vhost SCSI controller naa.0.0
00:11:49.024   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@198 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_delete_controller naa.0.0
00:11:49.282   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( i++ ))
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@193 -- # (( i < vm_count ))
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@210 -- # clean_lvol_cfg
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@45 -- # notice 'Removing lvol bdevs'
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing lvol bdevs'
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing lvol bdevs'
00:11:49.283  INFO: Removing lvol bdevs
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@46 -- # for lvol_bdev in "${lvol_bdevs[@]}"
00:11:49.283   10:39:38 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@47 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock -t 120 bdev_lvol_delete 6e03160c-5372-407f-a809-79382c3da070
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@48 -- # notice 'lvol bdev 6e03160c-5372-407f-a809-79382c3da070 removed'
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'lvol bdev 6e03160c-5372-407f-a809-79382c3da070 removed'
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: lvol bdev 6e03160c-5372-407f-a809-79382c3da070 removed'
00:11:51.194  INFO: lvol bdev 6e03160c-5372-407f-a809-79382c3da070 removed
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@51 -- # notice 'Removing lvol stores'
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing lvol stores'
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing lvol stores'
00:11:51.194  INFO: Removing lvol stores
00:11:51.194   10:39:40 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@52 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock -t 120 bdev_lvol_delete_lvstore -u 48a3e668-55aa-4c49-846a-57de4b525423
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@53 -- # notice 'lvol store 48a3e668-55aa-4c49-846a-57de4b525423 removed'
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'lvol store 48a3e668-55aa-4c49-846a-57de4b525423 removed'
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: lvol store 48a3e668-55aa-4c49-846a-57de4b525423 removed'
00:11:51.453  INFO: lvol store 48a3e668-55aa-4c49-846a-57de4b525423 removed
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@212 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_get_lvstores
00:11:51.453  []
00:11:51.453   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@213 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_get_bdevs
00:11:51.712  [
00:11:51.712    {
00:11:51.712      "name": "Nvme0n1",
00:11:51.712      "aliases": [
00:11:51.712        "36344730-5260-5497-0025-38450000011d"
00:11:51.712      ],
00:11:51.712      "product_name": "NVMe disk",
00:11:51.712      "block_size": 512,
00:11:51.712      "num_blocks": 3750748848,
00:11:51.712      "uuid": "36344730-5260-5497-0025-38450000011d",
00:11:51.712      "numa_id": 0,
00:11:51.712      "assigned_rate_limits": {
00:11:51.712        "rw_ios_per_sec": 0,
00:11:51.712        "rw_mbytes_per_sec": 0,
00:11:51.712        "r_mbytes_per_sec": 0,
00:11:51.712        "w_mbytes_per_sec": 0
00:11:51.712      },
00:11:51.712      "claimed": false,
00:11:51.712      "zoned": false,
00:11:51.712      "supported_io_types": {
00:11:51.712        "read": true,
00:11:51.712        "write": true,
00:11:51.712        "unmap": true,
00:11:51.712        "flush": true,
00:11:51.712        "reset": true,
00:11:51.712        "nvme_admin": true,
00:11:51.712        "nvme_io": true,
00:11:51.712        "nvme_io_md": false,
00:11:51.712        "write_zeroes": true,
00:11:51.712        "zcopy": false,
00:11:51.713        "get_zone_info": false,
00:11:51.713        "zone_management": false,
00:11:51.713        "zone_append": false,
00:11:51.713        "compare": true,
00:11:51.713        "compare_and_write": false,
00:11:51.713        "abort": true,
00:11:51.713        "seek_hole": false,
00:11:51.713        "seek_data": false,
00:11:51.713        "copy": false,
00:11:51.713        "nvme_iov_md": false
00:11:51.713      },
00:11:51.713      "driver_specific": {
00:11:51.713        "nvme": [
00:11:51.713          {
00:11:51.713            "pci_address": "0000:5e:00.0",
00:11:51.713            "trid": {
00:11:51.713              "trtype": "PCIe",
00:11:51.713              "traddr": "0000:5e:00.0"
00:11:51.713            },
00:11:51.713            "ctrlr_data": {
00:11:51.713              "cntlid": 6,
00:11:51.713              "vendor_id": "0x144d",
00:11:51.713              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:11:51.713              "serial_number": "S64GNE0R605497",
00:11:51.713              "firmware_revision": "GDC5302Q",
00:11:51.713              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:11:51.713              "oacs": {
00:11:51.713                "security": 1,
00:11:51.713                "format": 1,
00:11:51.713                "firmware": 1,
00:11:51.713                "ns_manage": 1
00:11:51.713              },
00:11:51.713              "multi_ctrlr": false,
00:11:51.713              "ana_reporting": false
00:11:51.713            },
00:11:51.713            "vs": {
00:11:51.713              "nvme_version": "1.4"
00:11:51.713            },
00:11:51.713            "ns_data": {
00:11:51.713              "id": 1,
00:11:51.713              "can_share": false
00:11:51.713            },
00:11:51.713            "security": {
00:11:51.713              "opal": true
00:11:51.713            }
00:11:51.713          }
00:11:51.713        ],
00:11:51.713        "mp_policy": "active_passive"
00:11:51.713      }
00:11:51.713    },
00:11:51.713    {
00:11:51.713      "name": "Nvme1n1",
00:11:51.713      "aliases": [
00:11:51.713        "bf9e8a9c-07a7-4245-83d1-2e91afd7063e"
00:11:51.713      ],
00:11:51.713      "product_name": "NVMe disk",
00:11:51.713      "block_size": 512,
00:11:51.713      "num_blocks": 732585168,
00:11:51.713      "uuid": "bf9e8a9c-07a7-4245-83d1-2e91afd7063e",
00:11:51.713      "numa_id": 1,
00:11:51.713      "assigned_rate_limits": {
00:11:51.713        "rw_ios_per_sec": 0,
00:11:51.713        "rw_mbytes_per_sec": 0,
00:11:51.713        "r_mbytes_per_sec": 0,
00:11:51.713        "w_mbytes_per_sec": 0
00:11:51.713      },
00:11:51.713      "claimed": false,
00:11:51.713      "zoned": false,
00:11:51.713      "supported_io_types": {
00:11:51.713        "read": true,
00:11:51.713        "write": true,
00:11:51.713        "unmap": true,
00:11:51.713        "flush": true,
00:11:51.713        "reset": true,
00:11:51.713        "nvme_admin": true,
00:11:51.713        "nvme_io": true,
00:11:51.713        "nvme_io_md": false,
00:11:51.713        "write_zeroes": true,
00:11:51.713        "zcopy": false,
00:11:51.713        "get_zone_info": false,
00:11:51.713        "zone_management": false,
00:11:51.713        "zone_append": false,
00:11:51.713        "compare": false,
00:11:51.713        "compare_and_write": false,
00:11:51.713        "abort": true,
00:11:51.713        "seek_hole": false,
00:11:51.713        "seek_data": false,
00:11:51.713        "copy": false,
00:11:51.713        "nvme_iov_md": false
00:11:51.713      },
00:11:51.713      "driver_specific": {
00:11:51.713        "nvme": [
00:11:51.713          {
00:11:51.713            "pci_address": "0000:af:00.0",
00:11:51.713            "trid": {
00:11:51.713              "trtype": "PCIe",
00:11:51.713              "traddr": "0000:af:00.0"
00:11:51.713            },
00:11:51.713            "ctrlr_data": {
00:11:51.713              "cntlid": 0,
00:11:51.713              "vendor_id": "0x8086",
00:11:51.713              "model_number": "INTEL SSDPED1K375GA",
00:11:51.713              "serial_number": "PHKS7481000F375AGN",
00:11:51.713              "firmware_revision": "E2010600",
00:11:51.713              "oacs": {
00:11:51.713                "security": 1,
00:11:51.713                "format": 1,
00:11:51.713                "firmware": 1,
00:11:51.713                "ns_manage": 0
00:11:51.713              },
00:11:51.713              "multi_ctrlr": false,
00:11:51.713              "ana_reporting": false
00:11:51.713            },
00:11:51.713            "vs": {
00:11:51.713              "nvme_version": "1.0"
00:11:51.713            },
00:11:51.713            "ns_data": {
00:11:51.713              "id": 1,
00:11:51.713              "can_share": false
00:11:51.713            },
00:11:51.713            "security": {
00:11:51.713              "opal": true
00:11:51.713            }
00:11:51.713          }
00:11:51.713        ],
00:11:51.713        "mp_policy": "active_passive"
00:11:51.713      }
00:11:51.713    },
00:11:51.713    {
00:11:51.713      "name": "Nvme2n1",
00:11:51.713      "aliases": [
00:11:51.713        "e3e6f570-f67d-4e01-9319-3a62f8a2d812"
00:11:51.713      ],
00:11:51.713      "product_name": "NVMe disk",
00:11:51.713      "block_size": 512,
00:11:51.713      "num_blocks": 732585168,
00:11:51.713      "uuid": "e3e6f570-f67d-4e01-9319-3a62f8a2d812",
00:11:51.713      "numa_id": 1,
00:11:51.713      "assigned_rate_limits": {
00:11:51.713        "rw_ios_per_sec": 0,
00:11:51.713        "rw_mbytes_per_sec": 0,
00:11:51.713        "r_mbytes_per_sec": 0,
00:11:51.713        "w_mbytes_per_sec": 0
00:11:51.713      },
00:11:51.713      "claimed": false,
00:11:51.713      "zoned": false,
00:11:51.713      "supported_io_types": {
00:11:51.713        "read": true,
00:11:51.713        "write": true,
00:11:51.713        "unmap": true,
00:11:51.713        "flush": true,
00:11:51.713        "reset": true,
00:11:51.713        "nvme_admin": true,
00:11:51.713        "nvme_io": true,
00:11:51.713        "nvme_io_md": false,
00:11:51.713        "write_zeroes": true,
00:11:51.713        "zcopy": false,
00:11:51.713        "get_zone_info": false,
00:11:51.713        "zone_management": false,
00:11:51.713        "zone_append": false,
00:11:51.713        "compare": false,
00:11:51.713        "compare_and_write": false,
00:11:51.713        "abort": true,
00:11:51.713        "seek_hole": false,
00:11:51.713        "seek_data": false,
00:11:51.713        "copy": false,
00:11:51.713        "nvme_iov_md": false
00:11:51.713      },
00:11:51.713      "driver_specific": {
00:11:51.713        "nvme": [
00:11:51.713          {
00:11:51.713            "pci_address": "0000:b0:00.0",
00:11:51.713            "trid": {
00:11:51.713              "trtype": "PCIe",
00:11:51.713              "traddr": "0000:b0:00.0"
00:11:51.713            },
00:11:51.713            "ctrlr_data": {
00:11:51.713              "cntlid": 0,
00:11:51.713              "vendor_id": "0x8086",
00:11:51.713              "model_number": "INTEL SSDPED1K375GA",
00:11:51.713              "serial_number": "PHKS7482004A375AGN",
00:11:51.713              "firmware_revision": "E2010600",
00:11:51.713              "oacs": {
00:11:51.713                "security": 1,
00:11:51.713                "format": 1,
00:11:51.713                "firmware": 1,
00:11:51.713                "ns_manage": 0
00:11:51.713              },
00:11:51.713              "multi_ctrlr": false,
00:11:51.713              "ana_reporting": false
00:11:51.713            },
00:11:51.713            "vs": {
00:11:51.714              "nvme_version": "1.0"
00:11:51.714            },
00:11:51.714            "ns_data": {
00:11:51.714              "id": 1,
00:11:51.714              "can_share": false
00:11:51.714            },
00:11:51.714            "security": {
00:11:51.714              "opal": true
00:11:51.714            }
00:11:51.714          }
00:11:51.714        ],
00:11:51.714        "mp_policy": "active_passive"
00:11:51.714      }
00:11:51.714    }
00:11:51.714  ]
00:11:51.714   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@214 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_get_controllers
00:11:51.973  []
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@216 -- # notice 'Shutting down SPDK vhost app...'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Shutting down SPDK vhost app...'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down SPDK vhost app...'
00:11:51.973  INFO: Shutting down SPDK vhost app...
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@217 -- # vhost_kill 0
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@202 -- # local rc=0
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@203 -- # local vhost_name=0
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@210 -- # local vhost_dir
00:11:51.973    10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@211 -- # get_vhost_dir 0
00:11:51.973    10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:11:51.973    10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:51.973    10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@220 -- # local vhost_pid
00:11:51.973    10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@221 -- # vhost_pid=1873307
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1873307) app'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1873307) app'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1873307) app'
00:11:51.973  INFO: killing vhost (PID 1873307) app
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@224 -- # kill -INT 1873307
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:51.973  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i = 0 ))
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1873307
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:11:51.973  .
00:11:51.973   10:39:41 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:11:52.910   10:39:42 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:11:52.910   10:39:42 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:52.910   10:39:42 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1873307
00:11:52.910   10:39:42 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:11:52.910  .
00:11:52.910   10:39:42 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:11:54.287   10:39:43 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:11:54.287   10:39:43 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:54.287   10:39:43 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1873307
00:11:54.287   10:39:43 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:11:54.287  .
00:11:54.287   10:39:43 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:11:54.933   10:39:44 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:11:54.933   10:39:44 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:54.933   10:39:44 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1873307
00:11:54.933   10:39:44 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:11:54.933  .
00:11:54.933   10:39:44 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1873307
00:11:56.314  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1873307) - No such process
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@231 -- # break
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@234 -- # kill -0 1873307
00:11:56.314  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1873307) - No such process
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@239 -- # kill -0 1873307
00:11:56.314  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1873307) - No such process
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@245 -- # is_pid_child 1873307
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1668 -- # local pid=1873307 _pid
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1670 -- # read -r _pid
00:11:56.314    10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1667 -- # jobs -pr
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1670 -- # read -r _pid
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1674 -- # return 1
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:56.314   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@261 -- # return 0
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- lvol/lvol_test.sh@219 -- # vhosttestfini
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:11:56.315  
00:11:56.315  real	0m56.907s
00:11:56.315  user	3m31.221s
00:11:56.315  sys	0m6.725s
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:56.315   10:39:45 vhost.vhost_scsi_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:56.315  ************************************
00:11:56.315  END TEST vhost_scsi_lvol_integrity
00:11:56.315  ************************************
00:11:56.315   10:39:45 vhost -- vhost/vhost.sh@64 -- # echo 'Running lvol integrity suite...'
00:11:56.315  Running lvol integrity suite...
00:11:56.315   10:39:45 vhost -- vhost/vhost.sh@65 -- # run_test vhost_blk_lvol_integrity /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh -x --fio-bin=/usr/src/fio-static/fio --ctrl-type=spdk_vhost_blk
00:11:56.315   10:39:45 vhost -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:56.315   10:39:45 vhost -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:56.315   10:39:45 vhost -- common/autotest_common.sh@10 -- # set +x
00:11:56.315  ************************************
00:11:56.315  START TEST vhost_blk_lvol_integrity
00:11:56.315  ************************************
00:11:56.315   10:39:45 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh -x --fio-bin=/usr/src/fio-static/fio --ctrl-type=spdk_vhost_blk
00:11:56.315  * Looking for test storage...
00:11:56.315  * Found test storage at /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1693 -- # lcov --version
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@336 -- # IFS=.-:
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@336 -- # read -ra ver1
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@337 -- # IFS=.-:
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@337 -- # read -ra ver2
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@338 -- # local 'op=<'
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@340 -- # ver1_l=2
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@341 -- # ver2_l=1
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@344 -- # case "$op" in
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@345 -- # : 1
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@365 -- # decimal 1
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@353 -- # local d=1
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@355 -- # echo 1
00:11:56.315    10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@365 -- # ver1[v]=1
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@366 -- # decimal 2
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@353 -- # local d=2
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:56.315     10:39:45 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@355 -- # echo 2
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@366 -- # ver2[v]=2
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@368 -- # return 0
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:56.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:56.315  		--rc genhtml_branch_coverage=1
00:11:56.315  		--rc genhtml_function_coverage=1
00:11:56.315  		--rc genhtml_legend=1
00:11:56.315  		--rc geninfo_all_blocks=1
00:11:56.315  		--rc geninfo_unexecuted_blocks=1
00:11:56.315  		
00:11:56.315  		'
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:56.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:56.315  		--rc genhtml_branch_coverage=1
00:11:56.315  		--rc genhtml_function_coverage=1
00:11:56.315  		--rc genhtml_legend=1
00:11:56.315  		--rc geninfo_all_blocks=1
00:11:56.315  		--rc geninfo_unexecuted_blocks=1
00:11:56.315  		
00:11:56.315  		'
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:56.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:56.315  		--rc genhtml_branch_coverage=1
00:11:56.315  		--rc genhtml_function_coverage=1
00:11:56.315  		--rc genhtml_legend=1
00:11:56.315  		--rc geninfo_all_blocks=1
00:11:56.315  		--rc geninfo_unexecuted_blocks=1
00:11:56.315  		
00:11:56.315  		'
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:56.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:56.315  		--rc genhtml_branch_coverage=1
00:11:56.315  		--rc genhtml_function_coverage=1
00:11:56.315  		--rc genhtml_legend=1
00:11:56.315  		--rc geninfo_all_blocks=1
00:11:56.315  		--rc geninfo_unexecuted_blocks=1
00:11:56.315  		
00:11:56.315  		'
00:11:56.315   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@9 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@6 -- # : false
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@7 -- # : /root/vhost_test
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@9 -- # : qemu-img
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/..
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vhost-phy-autotest
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:11:56.315      10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:11:56.315    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common/autotest.config
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@2 -- # vhost_0_main_core=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:11:56.315     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/common.sh
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vhost-phy-autotest/spdk/test/event/scheduler/scheduler
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/test/scheduler/cgroups.sh
00:11:56.316      10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:11:56.316       10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@244 -- # check_cgroup
00:11:56.316       10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:11:56.316       10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:11:56.316       10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@10 -- # echo 2
00:11:56.316      10:39:46 vhost.vhost_blk_lvol_integrity -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@10 -- # source /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/common.sh
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@15 -- # shopt -s extglob
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- paths/export.sh@5 -- # export PATH
00:11:56.316     10:39:46 vhost.vhost_blk_lvol_integrity -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@12 -- # get_vhost_dir 0
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@12 -- # rpc_py='/var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@14 -- # vm_count=1
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@15 -- # ctrl_type=spdk_vhost_scsi
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@16 -- # use_fs=false
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@17 -- # distribute_cores=false
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@71 -- # set -x
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@72 -- # x=-x
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@59 -- # case "$OPTARG" in
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@61 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@57 -- # case "$optchar" in
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@59 -- # case "$OPTARG" in
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@63 -- # ctrl_type=spdk_vhost_blk
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@56 -- # getopts xh-: optchar
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@78 -- # vhosttestinit
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:56.316   10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@81 -- # source /dev/fd/62
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@81 -- # gen_cpu_vm_spdk_config 1 2 4 '' 0
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1401 -- # local vm_count=1 vm_cpu_num=2 vm
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1402 -- # local spdk_cpu_num=4 spdk_cpu_list= spdk_cpus
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1403 -- # nodes=('0')
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1403 -- # local nodes node
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1404 -- # local env
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1406 -- # spdk_cpus=spdk_cpu_num
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1407 -- # [[ -n '' ]]
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1409 -- # (( 1 > 0 ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1410 -- # (( 1 == 1 ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1410 -- # node=0
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1411 -- # (( vm = 0 ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1411 -- # (( vm < vm_count ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1412 -- # env+=("VM${vm}_NODE=${nodes[vm]:-$node}")
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1411 -- # (( vm++ ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1411 -- # (( vm < vm_count ))
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1416 -- # env+=("$spdk_cpus=${!spdk_cpus}")
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1417 -- # env+=("vm_count=$vm_count")
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1418 -- # env+=("vm_cpu_num=$vm_cpu_num")
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1420 -- # export VM0_NODE=0 spdk_cpu_num=4 vm_count=1 vm_cpu_num=2
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1420 -- # VM0_NODE=0
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1420 -- # spdk_cpu_num=4
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1420 -- # vm_count=1
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1420 -- # vm_cpu_num=2
00:11:56.316    10:39:46 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1422 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/vhost/conf-generator -p cpu
00:11:59.607  Requested number of SPDK CPUs allocated: 4
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- fd/62@4 -- # VM_0_qemu_mask=0,1
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- fd/62@5 -- # VM_0_qemu_numa_node=0
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- fd/62@6 -- # vhost_0_reactor_mask='[2,3,4,5]'
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- fd/62@7 -- # vhost_0_main_core=2
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@82 -- # spdk_mask='[2,3,4,5]'
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@84 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' SIGTERM SIGABRT ERR
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@86 -- # vm_kill_all
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@476 -- # local vm
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@477 -- # vm_list_all
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@466 -- # vms=()
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@466 -- # local vms
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:59.607    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@478 -- # vm_kill 0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@446 -- # return 0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@88 -- # notice 'running SPDK vhost'
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'running SPDK vhost'
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: running SPDK vhost'
00:11:59.607  INFO: running SPDK vhost
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@89 -- # vhost_run -n 0 -- --cpumask '[2,3,4,5]'
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@116 -- # local OPTIND
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@117 -- # local vhost_name
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@118 -- # local run_gen_nvme=true
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@119 -- # local vhost_bin=vhost
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@120 -- # vhost_args=()
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@120 -- # local vhost_args
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@121 -- # cmd=()
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@121 -- # local cmd
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@124 -- # case "$optchar" in
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@125 -- # vhost_name=0
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@123 -- # getopts n:b:g optchar
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@137 -- # shift 3
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@139 -- # vhost_args=("$@")
00:11:59.607   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@141 -- # [[ -z 0 ]]
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@146 -- # local vhost_dir
00:11:59.608    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@147 -- # get_vhost_dir 0
00:11:59.608    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:11:59.608    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:59.608    10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@147 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@148 -- # local vhost_app=/var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@149 -- # local vhost_log_file=/root/vhost_test/vhost/0/vhost.log
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@150 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@151 -- # local vhost_socket=/root/vhost_test/vhost/0/usvhost
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@152 -- # notice 'starting vhost app in background'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'starting vhost app in background'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: starting vhost app in background'
00:11:59.608  INFO: starting vhost app in background
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@153 -- # [[ -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@154 -- # [[ -d /root/vhost_test/vhost/0 ]]
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@155 -- # mkdir -p /root/vhost_test/vhost/0
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@157 -- # [[ ! -x /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost ]]
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@162 -- # cmd=("$vhost_app" "-r" "$vhost_dir/rpc.sock" "${vhost_args[@]}")
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@163 -- # [[ vhost =~ vhost ]]
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@164 -- # cmd+=(-S "$vhost_dir")
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@167 -- # notice 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log'
00:11:59.608  INFO: Logging to:   /root/vhost_test/vhost/0/vhost.log
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@168 -- # notice 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Socket:      /root/vhost_test/vhost/0/usvhost'
00:11:59.608  INFO: Socket:      /root/vhost_test/vhost/0/usvhost
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@169 -- # notice 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0'
00:11:59.608  INFO: Command:     /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@171 -- # timing_enter vhost_start
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@173 -- # iobuf_small_count=16383
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@174 -- # iobuf_large_count=2047
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@177 -- # vhost_pid=1882490
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@178 -- # echo 1882490
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@180 -- # notice 'waiting for app to run...'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'waiting for app to run...'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@176 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask '[2,3,4,5]' -S /root/vhost_test/vhost/0 --wait-for-rpc
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: waiting for app to run...'
00:11:59.608  INFO: waiting for app to run...
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@181 -- # waitforlisten 1882490 /root/vhost_test/vhost/0/rpc.sock
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@835 -- # '[' -z 1882490 ']'
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:11:59.608  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:59.608   10:39:49 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:11:59.608  [2024-11-19 10:39:49.269124] Starting SPDK v25.01-pre git sha1 a0c128549 / DPDK 24.03.0 initialization...
00:11:59.608  [2024-11-19 10:39:49.269255] [ DPDK EAL parameters: vhost --no-shconf -l 2,3,4,5 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882490 ]
00:11:59.608  EAL: No free 2048 kB hugepages reported on node 1
00:11:59.867  [2024-11-19 10:39:49.406367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:59.867  [2024-11-19 10:39:49.513594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:59.867  [2024-11-19 10:39:49.513701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:59.867  [2024-11-19 10:39:49.513812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:11:59.867  [2024-11-19 10:39:49.513787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:00.437   10:39:50 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:00.437   10:39:50 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@868 -- # return 0
00:12:00.437   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@183 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock iobuf_set_options --small-pool-count=16383 --large-pool-count=2047
00:12:00.696   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@188 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock framework_start_init
00:12:01.263   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0 != *\-\-\n\o\-\p\c\i* ]]
00:12:01.263   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@192 -- # [[ /var/jenkins/workspace/vhost-phy-autotest/spdk/build/bin/vhost -r /root/vhost_test/vhost/0/rpc.sock --cpumask [2,3,4,5] -S /root/vhost_test/vhost/0 != *\-\u* ]]
00:12:01.263   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@192 -- # true
00:12:01.263   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:01.263   10:39:50 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@193 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@196 -- # notice 'vhost started - pid=1882490'
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'vhost started - pid=1882490'
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: vhost started - pid=1882490'
00:12:02.641  INFO: vhost started - pid=1882490
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@198 -- # timing_exit vhost_start
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@90 -- # notice ...
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO ...
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: ...'
00:12:02.641  INFO: ...
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@92 -- # trap 'clean_lvol_cfg; error_exit "${FUNCNAME}" "${LINENO}"' SIGTERM SIGABRT ERR
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@94 -- # lvol_bdevs=()
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@95 -- # used_vms=
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@97 -- # id=0
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@99 -- # notice 'Creating lvol store on device Nvme0n1'
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Creating lvol store on device Nvme0n1'
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:02.641   10:39:52 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Creating lvol store on device Nvme0n1'
00:12:02.641  INFO: Creating lvol store on device Nvme0n1
00:12:02.641    10:39:52 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@100 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create_lvstore Nvme0n1 lvs_0 -c 4194304
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@100 -- # ls_guid=b230a30c-0e18-4157-b337-40b03d91a0e1
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j = 0 ))
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j < vm_count ))
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@103 -- # notice 'Creating lvol bdev for VM 0 on lvol store b230a30c-0e18-4157-b337-40b03d91a0e1'
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Creating lvol bdev for VM 0 on lvol store b230a30c-0e18-4157-b337-40b03d91a0e1'
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:03.579   10:39:53 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Creating lvol bdev for VM 0 on lvol store b230a30c-0e18-4157-b337-40b03d91a0e1'
00:12:03.579  INFO: Creating lvol bdev for VM 0 on lvol store b230a30c-0e18-4157-b337-40b03d91a0e1
00:12:03.579    10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@104 -- # get_lvs_free_mb b230a30c-0e18-4157-b337-40b03d91a0e1
00:12:03.579    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1368 -- # local lvs_uuid=b230a30c-0e18-4157-b337-40b03d91a0e1
00:12:03.579    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1369 -- # local lvs_info
00:12:03.579    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1370 -- # local fc
00:12:03.579    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1371 -- # local cs
00:12:03.579     10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_get_lvstores
00:12:03.838    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1372 -- # lvs_info='[
00:12:03.838    {
00:12:03.838      "uuid": "b230a30c-0e18-4157-b337-40b03d91a0e1",
00:12:03.838      "name": "lvs_0",
00:12:03.838      "base_bdev": "Nvme0n1",
00:12:03.838      "total_data_clusters": 457407,
00:12:03.838      "free_clusters": 457407,
00:12:03.838      "block_size": 512,
00:12:03.838      "cluster_size": 4194304
00:12:03.838    }
00:12:03.838  ]'
00:12:03.838     10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="b230a30c-0e18-4157-b337-40b03d91a0e1") .free_clusters'
00:12:03.838    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1373 -- # fc=457407
00:12:03.838     10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="b230a30c-0e18-4157-b337-40b03d91a0e1") .cluster_size'
00:12:04.098    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1374 -- # cs=4194304
00:12:04.098    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1377 -- # free_mb=1829628
00:12:04.098    10:39:53 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1378 -- # echo 1829628
00:12:04.098   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@104 -- # free_mb=1829628
00:12:04.098   10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@105 -- # size=1829628
00:12:04.098    10:39:53 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@106 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_create -u b230a30c-0e18-4157-b337-40b03d91a0e1 lbd_vm_0 1829628
00:12:06.634   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@106 -- # lb_name=60507f04-c1ef-46ba-aad8-1a3324c47e26
00:12:06.634   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@107 -- # lvol_bdevs+=("$lb_name")
00:12:06.634   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j++ ))
00:12:06.634   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@102 -- # (( j < vm_count ))
00:12:06.634    10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@110 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_get_bdevs
00:12:06.634   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@110 -- # bdev_info='[
00:12:06.634    {
00:12:06.634      "name": "Nvme0n1",
00:12:06.634      "aliases": [
00:12:06.634        "36344730-5260-5497-0025-38450000011d"
00:12:06.634      ],
00:12:06.634      "product_name": "NVMe disk",
00:12:06.634      "block_size": 512,
00:12:06.634      "num_blocks": 3750748848,
00:12:06.634      "uuid": "36344730-5260-5497-0025-38450000011d",
00:12:06.634      "numa_id": 0,
00:12:06.634      "assigned_rate_limits": {
00:12:06.634        "rw_ios_per_sec": 0,
00:12:06.634        "rw_mbytes_per_sec": 0,
00:12:06.634        "r_mbytes_per_sec": 0,
00:12:06.634        "w_mbytes_per_sec": 0
00:12:06.634      },
00:12:06.634      "claimed": true,
00:12:06.634      "claim_type": "read_many_write_one",
00:12:06.634      "zoned": false,
00:12:06.634      "supported_io_types": {
00:12:06.634        "read": true,
00:12:06.634        "write": true,
00:12:06.634        "unmap": true,
00:12:06.634        "flush": true,
00:12:06.634        "reset": true,
00:12:06.634        "nvme_admin": true,
00:12:06.634        "nvme_io": true,
00:12:06.634        "nvme_io_md": false,
00:12:06.634        "write_zeroes": true,
00:12:06.634        "zcopy": false,
00:12:06.634        "get_zone_info": false,
00:12:06.634        "zone_management": false,
00:12:06.634        "zone_append": false,
00:12:06.634        "compare": true,
00:12:06.634        "compare_and_write": false,
00:12:06.634        "abort": true,
00:12:06.634        "seek_hole": false,
00:12:06.634        "seek_data": false,
00:12:06.634        "copy": false,
00:12:06.634        "nvme_iov_md": false
00:12:06.634      },
00:12:06.634      "driver_specific": {
00:12:06.634        "nvme": [
00:12:06.634          {
00:12:06.634            "pci_address": "0000:5e:00.0",
00:12:06.634            "trid": {
00:12:06.634              "trtype": "PCIe",
00:12:06.634              "traddr": "0000:5e:00.0"
00:12:06.634            },
00:12:06.634            "ctrlr_data": {
00:12:06.634              "cntlid": 6,
00:12:06.634              "vendor_id": "0x144d",
00:12:06.634              "model_number": "SAMSUNG MZQL21T9HCJR-00A07",
00:12:06.634              "serial_number": "S64GNE0R605497",
00:12:06.634              "firmware_revision": "GDC5302Q",
00:12:06.634              "subnqn": "nqn.1994-11.com.samsung:nvme:PM9A3:2.5-inch:S64GNE0R605497      ",
00:12:06.634              "oacs": {
00:12:06.634                "security": 1,
00:12:06.634                "format": 1,
00:12:06.634                "firmware": 1,
00:12:06.634                "ns_manage": 1
00:12:06.634              },
00:12:06.634              "multi_ctrlr": false,
00:12:06.634              "ana_reporting": false
00:12:06.634            },
00:12:06.634            "vs": {
00:12:06.634              "nvme_version": "1.4"
00:12:06.634            },
00:12:06.634            "ns_data": {
00:12:06.634              "id": 1,
00:12:06.634              "can_share": false
00:12:06.634            },
00:12:06.634            "security": {
00:12:06.634              "opal": true
00:12:06.634            }
00:12:06.634          }
00:12:06.634        ],
00:12:06.634        "mp_policy": "active_passive"
00:12:06.634      }
00:12:06.634    },
00:12:06.634    {
00:12:06.634      "name": "Nvme1n1",
00:12:06.634      "aliases": [
00:12:06.634        "9592c0a1-5520-4ec4-96af-f86e44990254"
00:12:06.634      ],
00:12:06.634      "product_name": "NVMe disk",
00:12:06.634      "block_size": 512,
00:12:06.634      "num_blocks": 732585168,
00:12:06.634      "uuid": "9592c0a1-5520-4ec4-96af-f86e44990254",
00:12:06.634      "numa_id": 1,
00:12:06.634      "assigned_rate_limits": {
00:12:06.634        "rw_ios_per_sec": 0,
00:12:06.634        "rw_mbytes_per_sec": 0,
00:12:06.634        "r_mbytes_per_sec": 0,
00:12:06.634        "w_mbytes_per_sec": 0
00:12:06.634      },
00:12:06.634      "claimed": false,
00:12:06.634      "zoned": false,
00:12:06.634      "supported_io_types": {
00:12:06.634        "read": true,
00:12:06.634        "write": true,
00:12:06.634        "unmap": true,
00:12:06.634        "flush": true,
00:12:06.634        "reset": true,
00:12:06.634        "nvme_admin": true,
00:12:06.634        "nvme_io": true,
00:12:06.634        "nvme_io_md": false,
00:12:06.634        "write_zeroes": true,
00:12:06.634        "zcopy": false,
00:12:06.634        "get_zone_info": false,
00:12:06.634        "zone_management": false,
00:12:06.634        "zone_append": false,
00:12:06.634        "compare": false,
00:12:06.634        "compare_and_write": false,
00:12:06.634        "abort": true,
00:12:06.634        "seek_hole": false,
00:12:06.634        "seek_data": false,
00:12:06.634        "copy": false,
00:12:06.634        "nvme_iov_md": false
00:12:06.634      },
00:12:06.634      "driver_specific": {
00:12:06.634        "nvme": [
00:12:06.634          {
00:12:06.634            "pci_address": "0000:af:00.0",
00:12:06.634            "trid": {
00:12:06.634              "trtype": "PCIe",
00:12:06.634              "traddr": "0000:af:00.0"
00:12:06.634            },
00:12:06.634            "ctrlr_data": {
00:12:06.634              "cntlid": 0,
00:12:06.634              "vendor_id": "0x8086",
00:12:06.634              "model_number": "INTEL SSDPED1K375GA",
00:12:06.634              "serial_number": "PHKS7481000F375AGN",
00:12:06.634              "firmware_revision": "E2010600",
00:12:06.634              "oacs": {
00:12:06.634                "security": 1,
00:12:06.634                "format": 1,
00:12:06.634                "firmware": 1,
00:12:06.634                "ns_manage": 0
00:12:06.634              },
00:12:06.634              "multi_ctrlr": false,
00:12:06.634              "ana_reporting": false
00:12:06.634            },
00:12:06.634            "vs": {
00:12:06.634              "nvme_version": "1.0"
00:12:06.634            },
00:12:06.635            "ns_data": {
00:12:06.635              "id": 1,
00:12:06.635              "can_share": false
00:12:06.635            },
00:12:06.635            "security": {
00:12:06.635              "opal": true
00:12:06.635            }
00:12:06.635          }
00:12:06.635        ],
00:12:06.635        "mp_policy": "active_passive"
00:12:06.635      }
00:12:06.635    },
00:12:06.635    {
00:12:06.635      "name": "Nvme2n1",
00:12:06.635      "aliases": [
00:12:06.635        "7c00a828-63f7-4062-9701-c601edaee97d"
00:12:06.635      ],
00:12:06.635      "product_name": "NVMe disk",
00:12:06.635      "block_size": 512,
00:12:06.635      "num_blocks": 732585168,
00:12:06.635      "uuid": "7c00a828-63f7-4062-9701-c601edaee97d",
00:12:06.635      "numa_id": 1,
00:12:06.635      "assigned_rate_limits": {
00:12:06.635        "rw_ios_per_sec": 0,
00:12:06.635        "rw_mbytes_per_sec": 0,
00:12:06.635        "r_mbytes_per_sec": 0,
00:12:06.635        "w_mbytes_per_sec": 0
00:12:06.635      },
00:12:06.635      "claimed": false,
00:12:06.635      "zoned": false,
00:12:06.635      "supported_io_types": {
00:12:06.635        "read": true,
00:12:06.635        "write": true,
00:12:06.635        "unmap": true,
00:12:06.635        "flush": true,
00:12:06.635        "reset": true,
00:12:06.635        "nvme_admin": true,
00:12:06.635        "nvme_io": true,
00:12:06.635        "nvme_io_md": false,
00:12:06.635        "write_zeroes": true,
00:12:06.635        "zcopy": false,
00:12:06.635        "get_zone_info": false,
00:12:06.635        "zone_management": false,
00:12:06.635        "zone_append": false,
00:12:06.635        "compare": false,
00:12:06.635        "compare_and_write": false,
00:12:06.635        "abort": true,
00:12:06.635        "seek_hole": false,
00:12:06.635        "seek_data": false,
00:12:06.635        "copy": false,
00:12:06.635        "nvme_iov_md": false
00:12:06.635      },
00:12:06.635      "driver_specific": {
00:12:06.635        "nvme": [
00:12:06.635          {
00:12:06.635            "pci_address": "0000:b0:00.0",
00:12:06.635            "trid": {
00:12:06.635              "trtype": "PCIe",
00:12:06.635              "traddr": "0000:b0:00.0"
00:12:06.635            },
00:12:06.635            "ctrlr_data": {
00:12:06.635              "cntlid": 0,
00:12:06.635              "vendor_id": "0x8086",
00:12:06.635              "model_number": "INTEL SSDPED1K375GA",
00:12:06.635              "serial_number": "PHKS7482004A375AGN",
00:12:06.635              "firmware_revision": "E2010600",
00:12:06.635              "oacs": {
00:12:06.635                "security": 1,
00:12:06.635                "format": 1,
00:12:06.635                "firmware": 1,
00:12:06.635                "ns_manage": 0
00:12:06.635              },
00:12:06.635              "multi_ctrlr": false,
00:12:06.635              "ana_reporting": false
00:12:06.635            },
00:12:06.635            "vs": {
00:12:06.635              "nvme_version": "1.0"
00:12:06.635            },
00:12:06.635            "ns_data": {
00:12:06.635              "id": 1,
00:12:06.635              "can_share": false
00:12:06.635            },
00:12:06.635            "security": {
00:12:06.635              "opal": true
00:12:06.635            }
00:12:06.635          }
00:12:06.635        ],
00:12:06.635        "mp_policy": "active_passive"
00:12:06.635      }
00:12:06.635    },
00:12:06.635    {
00:12:06.635      "name": "60507f04-c1ef-46ba-aad8-1a3324c47e26",
00:12:06.635      "aliases": [
00:12:06.635        "lvs_0/lbd_vm_0"
00:12:06.635      ],
00:12:06.635      "product_name": "Logical Volume",
00:12:06.635      "block_size": 512,
00:12:06.635      "num_blocks": 3747078144,
00:12:06.635      "uuid": "60507f04-c1ef-46ba-aad8-1a3324c47e26",
00:12:06.635      "assigned_rate_limits": {
00:12:06.635        "rw_ios_per_sec": 0,
00:12:06.635        "rw_mbytes_per_sec": 0,
00:12:06.635        "r_mbytes_per_sec": 0,
00:12:06.635        "w_mbytes_per_sec": 0
00:12:06.635      },
00:12:06.635      "claimed": false,
00:12:06.635      "zoned": false,
00:12:06.635      "supported_io_types": {
00:12:06.635        "read": true,
00:12:06.635        "write": true,
00:12:06.635        "unmap": true,
00:12:06.635        "flush": false,
00:12:06.635        "reset": true,
00:12:06.635        "nvme_admin": false,
00:12:06.635        "nvme_io": false,
00:12:06.635        "nvme_io_md": false,
00:12:06.635        "write_zeroes": true,
00:12:06.635        "zcopy": false,
00:12:06.635        "get_zone_info": false,
00:12:06.635        "zone_management": false,
00:12:06.635        "zone_append": false,
00:12:06.635        "compare": false,
00:12:06.635        "compare_and_write": false,
00:12:06.635        "abort": false,
00:12:06.635        "seek_hole": true,
00:12:06.635        "seek_data": true,
00:12:06.635        "copy": false,
00:12:06.635        "nvme_iov_md": false
00:12:06.635      },
00:12:06.635      "driver_specific": {
00:12:06.635        "lvol": {
00:12:06.635          "lvol_store_uuid": "b230a30c-0e18-4157-b337-40b03d91a0e1",
00:12:06.635          "base_bdev": "Nvme0n1",
00:12:06.635          "thin_provision": false,
00:12:06.635          "num_allocated_clusters": 457407,
00:12:06.635          "snapshot": false,
00:12:06.635          "clone": false,
00:12:06.635          "esnap_clone": false
00:12:06.635        }
00:12:06.635      }
00:12:06.635    }
00:12:06.635  ]'
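A quick cross-check of the lvol bdev reported above: its num_blocks is consistent with the 1829628 MiB size requested at creation, assuming the 512-byte block_size shown in the same dump:

```shell
# num_blocks = size_mb * bytes_per_MiB / block_size
size_mb=1829628
block_size=512
echo $(( size_mb * 1024 * 1024 / block_size ))   # 3747078144, matching "num_blocks" above
```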
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@111 -- # notice 'Configuration after initial set-up:'
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Configuration after initial set-up:'
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Configuration after initial set-up:'
00:12:06.635  INFO: Configuration after initial set-up:
00:12:06.635   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@112 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_lvol_get_lvstores
00:12:06.896  [
00:12:06.896    {
00:12:06.896      "uuid": "b230a30c-0e18-4157-b337-40b03d91a0e1",
00:12:06.896      "name": "lvs_0",
00:12:06.896      "base_bdev": "Nvme0n1",
00:12:06.896      "total_data_clusters": 457407,
00:12:06.896      "free_clusters": 0,
00:12:06.896      "block_size": 512,
00:12:06.896      "cluster_size": 4194304
00:12:06.896    }
00:12:06.896  ]
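The lvstore now reports free_clusters 0 because the thick-provisioned lvol ("thin_provision": false) claimed every cluster in lvs_0; a minimal sanity check using the values from this run:

```shell
total=457407       # total_data_clusters of lvs_0
allocated=457407   # num_allocated_clusters of lbd_vm_0
echo $(( total - allocated ))   # 0 clusters remain free
```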
00:12:06.896   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@113 -- # echo '[
00:12:06.897  ]'
00:12:06.897  [
00:12:06.898    {
00:12:06.898      "name": "60507f04-c1ef-46ba-aad8-1a3324c47e26",
00:12:06.898      "aliases": [
00:12:06.898        "lvs_0/lbd_vm_0"
00:12:06.898      ],
00:12:06.898      "product_name": "Logical Volume",
00:12:06.898      "block_size": 512,
00:12:06.898      "num_blocks": 3747078144,
00:12:06.898      "uuid": "60507f04-c1ef-46ba-aad8-1a3324c47e26",
00:12:06.898      "assigned_rate_limits": {
00:12:06.898        "rw_ios_per_sec": 0,
00:12:06.898        "rw_mbytes_per_sec": 0,
00:12:06.898        "r_mbytes_per_sec": 0,
00:12:06.898        "w_mbytes_per_sec": 0
00:12:06.898      },
00:12:06.898      "claimed": false,
00:12:06.898      "zoned": false,
00:12:06.898      "supported_io_types": {
00:12:06.898        "read": true,
00:12:06.898        "write": true,
00:12:06.898        "unmap": true,
00:12:06.898        "flush": false,
00:12:06.898        "reset": true,
00:12:06.898        "nvme_admin": false,
00:12:06.898        "nvme_io": false,
00:12:06.898        "nvme_io_md": false,
00:12:06.898        "write_zeroes": true,
00:12:06.898        "zcopy": false,
00:12:06.898        "get_zone_info": false,
00:12:06.898        "zone_management": false,
00:12:06.898        "zone_append": false,
00:12:06.898        "compare": false,
00:12:06.898        "compare_and_write": false,
00:12:06.898        "abort": false,
00:12:06.898        "seek_hole": true,
00:12:06.898        "seek_data": true,
00:12:06.898        "copy": false,
00:12:06.898        "nvme_iov_md": false
00:12:06.898      },
00:12:06.898      "driver_specific": {
00:12:06.898        "lvol": {
00:12:06.898          "lvol_store_uuid": "b230a30c-0e18-4157-b337-40b03d91a0e1",
00:12:06.898          "base_bdev": "Nvme0n1",
00:12:06.898          "thin_provision": false,
00:12:06.898          "num_allocated_clusters": 457407,
00:12:06.898          "snapshot": false,
00:12:06.898          "clone": false,
00:12:06.898          "esnap_clone": false
00:12:06.898        }
00:12:06.898      }
00:12:06.898    }
00:12:06.898  ]
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i = 0 ))
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i < vm_count ))
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@117 -- # vm=vm_0
00:12:06.898    10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@121 -- # jq -r 'map(select(.aliases[] | contains("vm_0")) |             .aliases[]) | join(" ")'
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@121 -- # bdevs=lvs_0/lbd_vm_0
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@122 -- # bdevs=($bdevs)
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@124 -- # setup_cmd='vm_setup --disk-type=spdk_vhost_blk --force=0'
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@125 -- # setup_cmd+=' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2'
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@128 -- # mask_arg=("--cpumask" "$spdk_mask")
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@130 -- # [[ spdk_vhost_blk == \s\p\d\k\_\v\h\o\s\t\_\s\c\s\i ]]
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@136 -- # [[ spdk_vhost_blk == \s\p\d\k\_\v\h\o\s\t\_\b\l\k ]]
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@137 -- # disk=
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( j = 0 ))
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( j < 1 ))
00:12:06.898   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@139 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_create_blk_controller naa.0.0 lvs_0/lbd_vm_0 --cpumask '[2,3,4,5]'
00:12:07.158  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vhost-user server: socket created, fd: 343
00:12:07.158  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) binding succeeded
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@140 -- # disk+=0:
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( j++ ))
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( j < 1 ))
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@142 -- # disk=0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@143 -- # setup_cmd+=' --disks=0'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@146 -- # vm_setup --disk-type=spdk_vhost_blk --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@518 -- # xtrace_disable
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:12:07.158  INFO: Creating new VM in /root/vhost_test/vms/0
00:12:07.158  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:07.158  INFO: TASK MASK: 0,1
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@671 -- # local node_num=0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:07.158  INFO: NUMA NODE: 0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@701 -- # IFS=,
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@702 -- # disk_type=spdk_vhost_blk
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@704 -- # case $disk_type in
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@732 -- # notice 'using socket /root/vhost_test/vhost/0/naa.0.0'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vhost/0/naa.0.0'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vhost/0/naa.0.0'
00:12:07.158  INFO: using socket /root/vhost_test/vhost/0/naa.0.0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@733 -- # cmd+=(-chardev "socket,id=char_$disk,path=$vhost_dir/naa.$disk.$vm_num")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@734 -- # cmd+=(-device "vhost-user-blk-pci,num-queues=$queue_number,chardev=char_$disk")
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@735 -- # [[ 0 == '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@740 -- # false
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@785 -- # (( 0 ))
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:12:07.158  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@787 -- # cat
00:12:07.158    10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 0,1 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -chardev socket,id=char_0,path=/root/vhost_test/vhost/0/naa.0.0 -device vhost-user-blk-pci,num-queues=2,chardev=char_0
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@827 -- # echo 10000
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@828 -- # echo 10001
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@829 -- # echo 10002
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@834 -- # echo 10004
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@835 -- # echo 100
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:07.158   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@147 -- # used_vms+=' 0'
00:12:07.159   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i++ ))
00:12:07.159   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@138 -- # (( i < vm_count ))
00:12:07.159   10:39:56 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@150 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vhost_get_controllers
00:12:07.418  [
00:12:07.418    {
00:12:07.418      "ctrlr": "naa.0.0",
00:12:07.418      "cpumask": "0x3c",
00:12:07.418      "delay_base_us": 0,
00:12:07.418      "iops_threshold": 60000,
00:12:07.418      "socket": "/root/vhost_test/vhost/0/naa.0.0",
00:12:07.418      "sessions": [],
00:12:07.418      "backend_specific": {
00:12:07.418        "block": {
00:12:07.418          "readonly": false,
00:12:07.418          "bdev": "60507f04-c1ef-46ba-aad8-1a3324c47e26",
00:12:07.418          "transport": "vhost_user_blk"
00:12:07.418        }
00:12:07.418      }
00:12:07.418    }
00:12:07.418  ]
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@153 -- # vm_run 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@843 -- # local run_all=false
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@844 -- # local vms_to_run=
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@856 -- # false
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@859 -- # shift 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@860 -- # for vm in "$@"
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@871 -- # vm_is_running 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@373 -- # return 1
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:12:07.418  INFO: running /root/vhost_test/vms/0/run.sh
00:12:07.418   10:39:57 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:12:07.418  Running VM in /root/vhost_test/vms/0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new vhost user connection is 76
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device, handle is 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_FEATURES
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Vhost-user protocol features: 0x11ebf
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_QUEUE_NUM
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_OWNER
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_FEATURES
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:347
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:348
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ERR
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_CONFIG
00:12:07.986  Waiting for QEMU pid file
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x140000046
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x00000008):
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 1
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_INFLIGHT_FD
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd num_queues: 2
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd queue_size: 128
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_size: 4224
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_offset: 0
00:12:07.986  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight fd: 349
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_INFLIGHT_FD
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_size: 4224
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_offset: 0
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd num_queues: 2
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd queue_size: 128
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd fd: 350
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd pervq_inflight_size: 2112
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:349
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:347
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x140000046
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_MEM_TABLE
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) guest memory region size: 0x40000000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 guest physical addr: 0x0
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 guest virtual  addr: 0x7fd28be00000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 host  virtual  addr: 0x7f3653600000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap addr : 0x7f3653600000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap size : 0x40000000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap align: 0x200000
00:12:07.987  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	 mmap off  : 0x0
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:0 file:351
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 0
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 1
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x0000000f):
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 0
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 1
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 1
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 1
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 1
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:12:08.246  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:12:09.183  === qemu.log ===
00:12:09.183  === qemu.log ===
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@154 -- # vm_wait_for_boot 300 0
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@913 -- # assert_number 300
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@281 -- # return 0
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@915 -- # xtrace_disable
00:12:09.183   10:39:58 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:12:09.183  INFO: Waiting for VMs to boot
00:12:09.183  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x00000000):
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 file:1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x150007446
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x00000008):
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_INFLIGHT_FD
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd num_queues: 2
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) get_inflight_fd queue_size: 128
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_size: 4224
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight mmap_offset: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) send inflight fd: 349
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_INFLIGHT_FD
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_size: 4224
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd mmap_offset: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd num_queues: 2
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd queue_size: 128
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd fd: 350
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set_inflight_fd pervq_inflight_size: 2112
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:0 file:349
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_CALL
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring call idx:1 file:351
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_FEATURES
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) negotiated Virtio features: 0x150007446
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_MEM_TABLE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) memory regions not changed
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:0 file:347
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_NUM
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_BASE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ADDR
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_KICK
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring kick idx:1 file:353
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 1 to qp idx: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x0000000f):
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 1
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:12:21.405  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) virtio is now ready for processing.
00:17:19.739  ...WARN: timeout waiting for machines to boot
00:17:19.739  WARN: ================
00:17:19.739  WARN: QEMU LOG:
00:17:19.739  WARN: VM LOG:
00:17:19.739  [    0.000000] Linux version 6.5.10-200.fc38.x86_64 (mockbuild@cbf61e0e869d4c6d90ab7044b9ed9a96) (gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4), GNU ld version 2.39-15.fc38) #1 SMP PREEMPT_DYNAMIC Thu Nov  2 19:59:55 UTC 2023
00:17:19.739  [    0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.5.10-200.fc38.x86_64 root=UUID=a280b604-6023-4ba5-bb9e-80d612f84b0d ro rootflags=subvol=root selinux=0 apparmor=0 net.ifnames=0 console=ttyS0 scsi_mod.use_blk_mq=1 console=ttyS0
00:17:19.739  [    0.000000] BIOS-provided physical RAM map:
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000003ffdcfff] usable
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x000000003ffdd000-0x000000003fffffff] reserved
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
00:17:19.739  [    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
00:17:19.740  [    0.000000] NX (Execute Disable) protection: active
00:17:19.740  [    0.000000] SMBIOS 2.8 present.
00:17:19.740  [    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
00:17:19.740  [    0.000000] Hypervisor detected: KVM
00:17:19.740  [    0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00
00:17:19.740  [    0.000001] kvm-clock: using sched offset of 11232716512 cycles
00:17:19.740  [    0.000003] clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
00:17:19.740  [    0.000005] tsc: Detected 2294.598 MHz processor
00:17:19.740  [    0.000815] last_pfn = 0x3ffdd max_arch_pfn = 0x400000000
00:17:19.740  [    0.000848] MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
00:17:19.740  [    0.000851] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
00:17:19.740  [    0.008438] found SMP MP-table at [mem 0x000f5b90-0x000f5b9f]
00:17:19.740  [    0.008460] Using GB pages for direct mapping
00:17:19.740  [    0.008527] RAMDISK: [mem 0x341cc000-0x360ddfff]
00:17:19.740  [    0.008530] ACPI: Early table checksum verification disabled
00:17:19.740  [    0.008533] ACPI: RSDP 0x00000000000F59B0 000014 (v00 BOCHS )
00:17:19.740  [    0.008537] ACPI: RSDT 0x000000003FFE1C1F 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008542] ACPI: FACP 0x000000003FFE1A03 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008547] ACPI: DSDT 0x000000003FFE0040 0019C3 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008551] ACPI: FACS 0x000000003FFE0000 000040
00:17:19.740  [    0.008554] ACPI: APIC 0x000000003FFE1A77 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008557] ACPI: HPET 0x000000003FFE1AF7 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008560] ACPI: SRAT 0x000000003FFE1B2F 0000C8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008563] ACPI: WAET 0x000000003FFE1BF7 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
00:17:19.740  [    0.008565] ACPI: Reserving FACP table memory at [mem 0x3ffe1a03-0x3ffe1a76]
00:17:19.740  [    0.008567] ACPI: Reserving DSDT table memory at [mem 0x3ffe0040-0x3ffe1a02]
00:17:19.740  [    0.008567] ACPI: Reserving FACS table memory at [mem 0x3ffe0000-0x3ffe003f]
00:17:19.740  [    0.008568] ACPI: Reserving APIC table memory at [mem 0x3ffe1a77-0x3ffe1af6]
00:17:19.740  [    0.008569] ACPI: Reserving HPET table memory at [mem 0x3ffe1af7-0x3ffe1b2e]
00:17:19.740  [    0.008569] ACPI: Reserving SRAT table memory at [mem 0x3ffe1b2f-0x3ffe1bf6]
00:17:19.740  [    0.008570] ACPI: Reserving WAET table memory at [mem 0x3ffe1bf7-0x3ffe1c1e]
00:17:19.740  [    0.008625] SRAT: PXM 0 -> APIC 0x00 -> Node 0
00:17:19.740  [    0.008627] SRAT: PXM 0 -> APIC 0x01 -> Node 0
00:17:19.740  [    0.008630] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
00:17:19.740  [    0.008632] ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x3fffffff]
00:17:19.740  [    0.008638] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x3ffdcfff] -> [mem 0x00000000-0x3ffdcfff]
00:17:19.740  [    0.008652] NODE_DATA(0) allocated [mem 0x3ffb2000-0x3ffdcfff]
00:17:19.740  [    0.009144] Zone ranges:
00:17:19.740  [    0.009145]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
00:17:19.740  [    0.009148]   DMA32    [mem 0x0000000001000000-0x000000003ffdcfff]
00:17:19.740  [    0.009150]   Normal   empty
00:17:19.740  [    0.009152]   Device   empty
00:17:19.740  [    0.009153] Movable zone start for each node
00:17:19.740  [    0.009157] Early memory node ranges
00:17:19.740  [    0.009158]   node   0: [mem 0x0000000000001000-0x000000000009efff]
00:17:19.740  [    0.009160]   node   0: [mem 0x0000000000100000-0x000000003ffdcfff]
00:17:19.740  [    0.009162] Initmem setup node 0 [mem 0x0000000000001000-0x000000003ffdcfff]
00:17:19.740  [    0.009167] On node 0, zone DMA: 1 pages in unavailable ranges
00:17:19.740  [    0.009194] On node 0, zone DMA: 97 pages in unavailable ranges
00:17:19.740  [    0.010665] On node 0, zone DMA32: 35 pages in unavailable ranges
00:17:19.740  [    0.011071] ACPI: PM-Timer IO Port: 0x608
00:17:19.740  [    0.011082] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
00:17:19.740  [    0.011124] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
00:17:19.740  [    0.011127] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
00:17:19.740  [    0.011129] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
00:17:19.740  [    0.011130] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
00:17:19.740  [    0.011131] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
00:17:19.740  [    0.011132] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
00:17:19.740  [    0.011136] ACPI: Using ACPI (MADT) for SMP configuration information
00:17:19.740  [    0.011138] ACPI: HPET id: 0x8086a201 base: 0xfed00000
00:17:19.740  [    0.011141] TSC deadline timer available
00:17:19.740  [    0.011142] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
00:17:19.740  [    0.011158] kvm-guest: KVM setup pv remote TLB flush
00:17:19.740  [    0.011162] kvm-guest: setup PV sched yield
00:17:19.740  [    0.011167] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
00:17:19.740  [    0.011169] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
00:17:19.740  [    0.011170] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
00:17:19.740  [    0.011171] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
00:17:19.740  [    0.011172] [mem 0x40000000-0xfeffbfff] available for PCI devices
00:17:19.740  [    0.011173] Booting paravirtualized kernel on KVM
00:17:19.740  [    0.011175] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
00:17:19.740  [    0.016699] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
00:17:19.740  [    0.016929] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u1048576
00:17:19.740  [    0.016995] kvm-guest: PV spinlocks enabled
00:17:19.740  [    0.016997] PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
00:17:19.740  [    0.017000] Kernel command line: BOOT_IMAGE=/vmlinuz-6.5.10-200.fc38.x86_64 root=UUID=a280b604-6023-4ba5-bb9e-80d612f84b0d ro rootflags=subvol=root selinux=0 apparmor=0 net.ifnames=0 console=ttyS0 scsi_mod.use_blk_mq=1 console=ttyS0
00:17:19.740  [    0.017104] Unknown kernel command line parameters "BOOT_IMAGE=/vmlinuz-6.5.10-200.fc38.x86_64 apparmor=0", will be passed to user space.
00:17:19.740  [    0.017134] random: crng init done
00:17:19.740  [    0.017228] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
00:17:19.740  [    0.017277] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
00:17:19.740  [    0.017330] Fallback order for Node 0: 0 
00:17:19.740  [    0.017333] Built 1 zonelists, mobility grouping on.  Total pages: 257757
00:17:19.740  [    0.017335] Policy zone: DMA32
00:17:19.740  [    0.017527] mem auto-init: stack:all(zero), heap alloc:off, heap free:off
00:17:19.740  [    0.019249] Memory: 929000K/1048044K available (18432K kernel code, 3266K rwdata, 14472K rodata, 4516K init, 17380K bss, 118784K reserved, 0K cma-reserved)
00:17:19.740  [    0.019430] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
00:17:19.740  [    0.019440] Kernel/User page tables isolation: enabled
00:17:19.740  [    0.019466] ftrace: allocating 53609 entries in 210 pages
00:17:19.740  [    0.028830] ftrace: allocated 210 pages with 4 groups
00:17:19.740  [    0.029606] Dynamic Preempt: voluntary
00:17:19.740  [    0.029632] rcu: Preemptible hierarchical RCU implementation.
00:17:19.740  [    0.029632] rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=2.
00:17:19.740  [    0.029634] 	Trampoline variant of Tasks RCU enabled.
00:17:19.740  [    0.029634] 	Rude variant of Tasks RCU enabled.
00:17:19.740  [    0.029635] 	Tracing variant of Tasks RCU enabled.
00:17:19.740  [    0.029635] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
00:17:19.740  [    0.029636] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
00:17:19.740  [    0.032257] NR_IRQS: 524544, nr_irqs: 440, preallocated irqs: 16
00:17:19.740  [    0.032452] rcu: srcu_init: Setting srcu_struct sizes based on contention.
00:17:19.740  [    0.032549] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
00:17:19.740  [    0.044654] Console: colour VGA+ 80x25
00:17:19.740  [    0.044705] printk: console [ttyS0] enabled
00:17:19.740  [    0.145349] ACPI: Core revision 20230331
00:17:19.740  [    0.146015] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
00:17:19.740  [    0.147347] APIC: Switch to symmetric I/O mode setup
00:17:19.740  [    0.148291] x2apic enabled
00:17:19.740  [    0.148961] Switched APIC routing to physical x2apic.
00:17:19.740  [    0.149678] kvm-guest: setup PV IPIs
00:17:19.740  [    0.151250] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
00:17:19.740  [    0.152125] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x211346443c0, max_idle_ns: 440795268033 ns
00:17:19.740  [    0.153616] Calibrating delay loop (skipped) preset value.. 4589.19 BogoMIPS (lpj=2294598)
00:17:19.740  [    0.154720] x86/cpu: User Mode Instruction Prevention (UMIP) activated
00:17:19.740  [    0.155679] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
00:17:19.740  [    0.156615] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
00:17:19.740  [    0.157619] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
00:17:19.740  [    0.158618] Spectre V2 : Mitigation: IBRS
00:17:19.740  [    0.159178] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
00:17:19.740  [    0.159615] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
00:17:19.740  [    0.160615] RETBleed: Mitigation: IBRS
00:17:19.740  [    0.161165] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
00:17:19.740  [    0.161616] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
00:17:19.740  [    0.162623] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
00:17:19.740  [    0.163615] TAA: Vulnerable: Clear CPU buffers attempted, no microcode
00:17:19.740  [    0.164615] MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
00:17:19.740  [    0.165615] GDS: Unknown: Dependent on hypervisor status
00:17:19.740  [    0.166643] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
00:17:19.740  [    0.167615] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
00:17:19.740  [    0.168615] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
00:17:19.740  [    0.169615] x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
00:17:19.740  [    0.170615] x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
00:17:19.740  [    0.171615] x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
00:17:19.740  [    0.172615] x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
00:17:19.740  [    0.173615] x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
00:17:19.740  [    0.174615] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
00:17:19.740  [    0.175615] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
00:17:19.740  [    0.176615] x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
00:17:19.741  [    0.177459] x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
00:17:19.741  [    0.177615] x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
00:17:19.741  [    0.178615] x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
00:17:19.741  [    0.179615] x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
00:17:19.741  [    0.180615] x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
00:17:19.741  [    0.181615] x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
00:17:19.741  [    0.210445] Freeing SMP alternatives memory: 48K
00:17:19.741  [    0.210616] pid_max: default: 32768 minimum: 301
00:17:19.741  [    0.211652] LSM: initializing lsm=lockdown,capability,yama,bpf,landlock,integrity
00:17:19.741  [    0.212636] Yama: becoming mindful.
00:17:19.741  [    0.213123] LSM support for eBPF active
00:17:19.741  [    0.213616] landlock: Up and running.
00:17:19.741  [    0.214143] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
00:17:19.741  [    0.214618] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
00:17:19.741  [    0.215889] smpboot: CPU0: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz (family: 0x6, model: 0x55, stepping: 0x4)
00:17:19.741  [    0.216786] RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
00:17:19.741  [    0.217633] RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
00:17:19.741  [    0.218632] RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
00:17:19.741  [    0.219632] Performance Events: Skylake events, full-width counters, Intel PMU driver.
00:17:19.741  [    0.220618] ... version:                2
00:17:19.741  [    0.221169] ... bit width:              48
00:17:19.741  [    0.221616] ... generic registers:      4
00:17:19.741  [    0.222166] ... value mask:             0000ffffffffffff
00:17:19.741  [    0.222616] ... max period:             00007fffffffffff
00:17:19.741  [    0.223343] ... fixed-purpose events:   3
00:17:19.741  [    0.223616] ... event mask:             000000070000000f
00:17:19.741  [    0.224465] signal: max sigframe size: 3632
00:17:19.741  [    0.224645] rcu: Hierarchical SRCU implementation.
00:17:19.741  [    0.225307] rcu: 	Max phase no-delay instances is 400.
00:17:19.741  [    0.225957] smp: Bringing up secondary CPUs ...
00:17:19.741  [    0.226722] smpboot: x86: Booting SMP configuration:
00:17:19.741  [    0.227415] .... node  #0, CPUs:      #1
00:17:19.741  [    0.227658] smp: Brought up 1 node, 2 CPUs
00:17:19.741  [    0.228617] smpboot: Max logical packages: 1
00:17:19.741  [    0.229199] smpboot: Total of 2 processors activated (9178.39 BogoMIPS)
00:17:19.741  [    0.229836] devtmpfs: initialized
00:17:19.741  [    0.230121] x86/mm: Memory block size: 128MB
00:17:19.741  [    0.230878] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
00:17:19.741  [    0.231619] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
00:17:19.741  [    0.232594] pinctrl core: initialized pinctrl subsystem
00:17:19.741  [    0.232753] PM: RTC time: 09:40:08, date: 2024-11-19
00:17:19.741  [    0.233850] NET: Registered PF_NETLINK/PF_ROUTE protocol family
00:17:19.741  [    0.234761] DMA: preallocated 128 KiB GFP_KERNEL pool for atomic allocations
00:17:19.741  [    0.235619] DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
00:17:19.741  [    0.236618] DMA: preallocated 128 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
00:17:19.741  [    0.237630] audit: initializing netlink subsys (disabled)
00:17:19.741  [    0.238390] audit: type=2000 audit(1732009208.928:1): state=initialized audit_enabled=0 res=1
00:17:19.741  [    0.238390] thermal_sys: Registered thermal governor 'fair_share'
00:17:19.741  [    0.238617] thermal_sys: Registered thermal governor 'bang_bang'
00:17:19.741  [    0.239447] thermal_sys: Registered thermal governor 'step_wise'
00:17:19.741  [    0.239616] thermal_sys: Registered thermal governor 'user_space'
00:17:19.741  [    0.240442] cpuidle: using governor menu
00:17:19.741  [    0.241281] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
00:17:19.741  [    0.241730] PCI: Using configuration type 1 for base access
00:17:19.741  [    0.242707] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
00:17:19.741  [    0.270301] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
00:17:19.741  [    0.270617] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
00:17:19.741  [    0.271513] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
00:17:19.741  [    0.272616] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
00:17:19.741  [    0.273668] cryptd: max_cpu_qlen set to 1000
00:17:19.741  [    0.274654] raid6: skipped pq benchmark and selected avx512x4
00:17:19.741  [    0.275618] raid6: using avx512x2 recovery algorithm
00:17:19.741  [    0.276349] ACPI: Added _OSI(Module Device)
00:17:19.741  [    0.276616] ACPI: Added _OSI(Processor Device)
00:17:19.741  [    0.277225] ACPI: Added _OSI(3.0 _SCP Extensions)
00:17:19.741  [    0.277616] ACPI: Added _OSI(Processor Aggregator Device)
00:17:19.741  [    0.278962] ACPI: 1 ACPI AML tables successfully acquired and loaded
00:17:19.741  [    0.280704] ACPI: Interpreter enabled
00:17:19.741  [    0.281223] ACPI: PM: (supports S0 S3 S4 S5)
00:17:19.741  [    0.282616] ACPI: Using IOAPIC for interrupt routing
00:17:19.741  [    0.283407] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
00:17:19.741  [    0.284617] PCI: Using E820 reservations for host bridge windows
00:17:19.741  [    0.285525] ACPI: Enabled 2 GPEs in block 00 to 0F
00:17:19.741  [    0.287501] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
00:17:19.741  [    0.288620] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI EDR HPX-Type3]
00:17:19.741  [    0.289617] acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
00:17:19.741  [    0.290623] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
00:17:19.741  [    0.292925] acpiphp: Slot [3] registered
00:17:19.741  [    0.293484] acpiphp: Slot [4] registered
00:17:19.741  [    0.293631] acpiphp: Slot [5] registered
00:17:19.741  [    0.294186] acpiphp: Slot [6] registered
00:17:19.741  [    0.294630] acpiphp: Slot [7] registered
00:17:19.741  [    0.295178] acpiphp: Slot [8] registered
00:17:19.741  [    0.296630] acpiphp: Slot [9] registered
00:17:19.741  [    0.297181] acpiphp: Slot [10] registered
00:17:19.741  [    0.297629] acpiphp: Slot [11] registered
00:17:19.741  [    0.298192] acpiphp: Slot [12] registered
00:17:19.741  [    0.298631] acpiphp: Slot [13] registered
00:17:19.741  [    0.299194] acpiphp: Slot [14] registered
00:17:19.741  [    0.299629] acpiphp: Slot [15] registered
00:17:19.741  [    0.300191] acpiphp: Slot [16] registered
00:17:19.741  [    0.300635] acpiphp: Slot [17] registered
00:17:19.741  [    0.301194] acpiphp: Slot [18] registered
00:17:19.741  [    0.301629] acpiphp: Slot [19] registered
00:17:19.741  [    0.302189] acpiphp: Slot [20] registered
00:17:19.741  [    0.302629] acpiphp: Slot [21] registered
00:17:19.741  [    0.303189] acpiphp: Slot [22] registered
00:17:19.741  [    0.303629] acpiphp: Slot [23] registered
00:17:19.741  [    0.304198] acpiphp: Slot [24] registered
00:17:19.741  [    0.304630] acpiphp: Slot [25] registered
00:17:19.741  [    0.305196] acpiphp: Slot [26] registered
00:17:19.741  [    0.306629] acpiphp: Slot [27] registered
00:17:19.741  [    0.307192] acpiphp: Slot [28] registered
00:17:19.741  [    0.307629] acpiphp: Slot [29] registered
00:17:19.741  [    0.308191] acpiphp: Slot [30] registered
00:17:19.741  [    0.308630] acpiphp: Slot [31] registered
00:17:19.741  [    0.309190] PCI host bridge to bus 0000:00
00:17:19.741  [    0.309616] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
00:17:19.741  [    0.310533] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
00:17:19.741  [    0.311616] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
00:17:19.741  [    0.312616] pci_bus 0000:00: root bus resource [mem 0x40000000-0xfebfffff window]
00:17:19.741  [    0.313616] pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
00:17:19.741  [    0.314616] pci_bus 0000:00: root bus resource [bus 00-ff]
00:17:19.741  [    0.315401] pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
00:17:19.741  [    0.316023] pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
00:17:19.741  [    0.318175] pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
00:17:19.741  [    0.321094] pci 0000:00:01.1: reg 0x20: [io  0xc0c0-0xc0cf]
00:17:19.741  [    0.322599] pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
00:17:19.741  [    0.323618] pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
00:17:19.741  [    0.324511] pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
00:17:19.741  [    0.325616] pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
00:17:19.741  [    0.326680] pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
00:17:19.741  [    0.327920] pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
00:17:19.741  [    0.328624] pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
00:17:19.741  [    0.329793] pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
00:17:19.741  [    0.332643] pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
00:17:19.741  [    0.334665] pci 0000:00:02.0: reg 0x18: [mem 0xfebf0000-0xfebf0fff]
00:17:19.741  [    0.340215] pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
00:17:19.741  [    0.340728] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
00:17:19.741  [    0.342913] pci 0000:00:03.0: [8086:100e] type 00 class 0x020000
00:17:19.741  [    0.344238] pci 0000:00:03.0: reg 0x10: [mem 0xfebc0000-0xfebdffff]
00:17:19.741  [    0.345164] pci 0000:00:03.0: reg 0x14: [io  0xc080-0xc0bf]
00:17:19.741  [    0.349618] pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
00:17:19.741  [    0.351310] pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
00:17:19.741  [    0.353327] pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
00:17:19.741  [    0.354288] pci 0000:00:04.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
00:17:19.741  [    0.357618] pci 0000:00:04.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
00:17:19.741  [    0.362229] ACPI: PCI: Interrupt link LNKA configured for IRQ 10
00:17:19.741  [    0.362717] ACPI: PCI: Interrupt link LNKB configured for IRQ 10
00:17:19.741  [    0.363629] ACPI: PCI: Interrupt link LNKC configured for IRQ 11
00:17:19.741  [    0.364535] ACPI: PCI: Interrupt link LNKD configured for IRQ 11
00:17:19.741  [    0.365672] ACPI: PCI: Interrupt link LNKS configured for IRQ 9
00:17:19.741  [    0.367637] iommu: Default domain type: Translated
00:17:19.741  [    0.368290] iommu: DMA domain TLB invalidation policy: lazy mode
00:17:19.741  [    0.368576] SCSI subsystem initialized
00:17:19.742  [    0.369660] ACPI: bus type USB registered
00:17:19.742  [    0.370235] usbcore: registered new interface driver usbfs
00:17:19.742  [    0.370626] usbcore: registered new interface driver hub
00:17:19.742  [    0.371362] usbcore: registered new device driver usb
00:17:19.742  [    0.371648] pps_core: LinuxPPS API ver. 1 registered
00:17:19.742  [    0.372437] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
00:17:19.742  [    0.372619] PTP clock support registered
00:17:19.742  [    0.373641] EDAC MC: Ver: 3.0.0
00:17:19.742  [    0.374976] NetLabel: Initializing
00:17:19.742  [    0.375458] NetLabel:  domain hash size = 128
00:17:19.742  [    0.375458] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
00:17:19.742  [    0.376637] NetLabel:  unlabeled traffic allowed by default
00:17:19.742  [    0.377411] mctp: management component transport protocol core
00:17:19.742  [    0.377618] NET: Registered PF_MCTP protocol family
00:17:19.742  [    0.378305] PCI: Using ACPI for IRQ routing
00:17:19.742  [    0.379744] pci 0000:00:02.0: vgaarb: setting as boot VGA device
00:17:19.742  [    0.380449] pci 0000:00:02.0: vgaarb: bridge control possible
00:17:19.742  [    0.380614] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
00:17:19.742  [    0.382617] vgaarb: loaded
00:17:19.742  [    0.383012] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
00:17:19.742  [    0.383617] hpet0: 3 comparators, 64-bit 100.000000 MHz counter
00:17:19.742  [    0.387656] clocksource: Switched to clocksource kvm-clock
00:17:19.742  [    0.402675] VFS: Disk quotas dquot_6.6.0
00:17:19.742  [    0.403253] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
00:17:19.742  [    0.404274] pnp: PnP ACPI init
00:17:19.742  [    0.404986] pnp: PnP ACPI: found 6 devices
00:17:19.742  [    0.411955] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
00:17:19.742  [    0.413231] NET: Registered PF_INET protocol family
00:17:19.742  [    0.413968] IP idents hash table entries: 16384 (order: 5, 131072 bytes, linear)
00:17:19.742  [    0.415806] tcp_listen_portaddr_hash hash table entries: 512 (order: 1, 8192 bytes, linear)
00:17:19.742  [    0.417153] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
00:17:19.742  [    0.418607] TCP established hash table entries: 8192 (order: 4, 65536 bytes, linear)
00:17:19.742  [    0.420054] TCP bind hash table entries: 8192 (order: 6, 262144 bytes, linear)
00:17:19.742  [    0.421445] TCP: Hash tables configured (established 8192 bind 8192)
00:17:19.742  [    0.422764] MPTCP token hash table entries: 1024 (order: 2, 24576 bytes, linear)
00:17:19.742  [    0.424181] UDP hash table entries: 512 (order: 2, 16384 bytes, linear)
00:17:19.742  [    0.425101] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes, linear)
00:17:19.742  [    0.426172] NET: Registered PF_UNIX/PF_LOCAL protocol family
00:17:19.742  [    0.427249] NET: Registered PF_XDP protocol family
00:17:19.742  [    0.428163] pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
00:17:19.742  [    0.429321] pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
00:17:19.742  [    0.430466] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
00:17:19.742  [    0.431735] pci_bus 0000:00: resource 7 [mem 0x40000000-0xfebfffff window]
00:17:19.742  [    0.433003] pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
00:17:19.742  [    0.434348] pci 0000:00:01.0: PIIX3: Enabling Passive Release
00:17:19.742  [    0.435426] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
00:17:19.742  [    0.436523] PCI: CLS 0 bytes, default 64
00:17:19.742  [    0.437146] Trying to unpack rootfs image as initramfs...
00:17:19.742  [    0.437168] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x211346443c0, max_idle_ns: 440795268033 ns
00:17:19.742  [    0.475642] Initialise system trusted keyrings
00:17:19.742  [    0.476609] Key type blacklist registered
00:17:19.742  [    0.477481] workingset: timestamp_bits=36 max_order=18 bucket_order=0
00:17:19.742  [    0.478816] zbud: loaded
00:17:19.742  [    0.479762] integrity: Platform Keyring initialized
00:17:19.742  [    0.480764] integrity: Machine keyring initialized
00:17:19.742  [    0.491040] NET: Registered PF_ALG protocol family
00:17:19.742  [    0.491714] xor: automatically using best checksumming function   avx       
00:17:19.742  [    0.492678] Key type asymmetric registered
00:17:19.742  [    0.493255] Asymmetric key parser 'x509' registered
00:17:19.742  [    0.727280] Freeing initrd memory: 31816K
00:17:19.742  [    0.732711] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245)
00:17:19.742  [    0.733826] io scheduler mq-deadline registered
00:17:19.742  [    0.734464] io scheduler kyber registered
00:17:19.742  [    0.735022] io scheduler bfq registered
00:17:19.742  [    0.737230] atomic64_test: passed for x86-64 platform with CX8 and with SSE
00:17:19.742  [    0.738429] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
00:17:19.742  [    0.739451] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
00:17:19.742  [    0.740550] ACPI: button: Power Button [PWRF]
00:17:19.742  [    0.760229] ACPI: \_SB_.LNKD: Enabled at IRQ 11
00:17:19.742  [    0.763834] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
00:17:19.742  [    0.764895] 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
00:17:19.742  [    0.767464] Non-volatile memory driver v1.3
00:17:19.742  [    0.768063] Linux agpgart interface v0.103
00:17:19.742  [    0.768734] ACPI: bus type drm_connector registered
00:17:19.742  [    0.770445] scsi host0: ata_piix
00:17:19.742  [    0.771032] scsi host1: ata_piix
00:17:19.742  [    0.771518] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
00:17:19.742  [    0.772451] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
00:17:19.742  [    0.773656] usbcore: registered new interface driver usbserial_generic
00:17:19.742  [    0.775098] usbserial: USB Serial support registered for generic
00:17:19.742  [    0.776348] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
00:17:19.742  [    0.778830] serio: i8042 KBD port at 0x60,0x64 irq 1
00:17:19.742  [    0.779828] serio: i8042 AUX port at 0x60,0x64 irq 12
00:17:19.742  [    0.780944] mousedev: PS/2 mouse device common for all mice
00:17:19.742  [    0.782434] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
00:17:19.742  [    0.784209] rtc_cmos 00:05: RTC can wake from S4
00:17:19.742  [    0.785905] rtc_cmos 00:05: registered as rtc0
00:17:19.742  [    0.785968] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
00:17:19.742  [    0.786763] rtc_cmos 00:05: setting system clock to 2024-11-19T09:40:09 UTC (1732009209)
00:17:19.742  [    0.789734] rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
00:17:19.742  [    0.791297] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
00:17:19.742  [    0.793723] device-mapper: uevent: version 1.0.3
00:17:19.742  [    0.793737] input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
00:17:19.742  [    0.794714] device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
00:17:19.742  [    0.797716] intel_pstate: CPU model not supported
00:17:19.742  [    0.798740] hid: raw HID events driver (C) Jiri Kosina
00:17:19.742  [    0.799772] usbcore: registered new interface driver usbhid
00:17:19.742  [    0.800523] usbhid: USB HID core driver
00:17:19.742  [    0.801194] drop_monitor: Initializing network drop monitor service
00:17:19.742  [    0.815080] Initializing XFRM netlink socket
00:17:19.742  [    0.815751] NET: Registered PF_INET6 protocol family
00:17:19.742  [    0.820495] Segment Routing with IPv6
00:17:19.742  [    0.821018] RPL Segment Routing with IPv6
00:17:19.742  [    0.821576] In-situ OAM (IOAM) with IPv6
00:17:19.742  [    0.822137] mip6: Mobile IPv6
00:17:19.742  [    0.822576] NET: Registered PF_PACKET protocol family
00:17:19.742  [    0.823999] No MBM correction factor available
00:17:19.742  [    0.824905] IPI shorthand broadcast: enabled
00:17:19.742  [    0.825779] AVX2 version of gcm_enc/dec engaged.
00:17:19.742  [    0.826794] AES CTR mode by8 optimization enabled
00:17:19.742  [    0.830283] sched_clock: Marking stable (713001501, 117155553)->(844534595, -14377541)
00:17:19.742  [    0.831830] registered taskstats version 1
00:17:19.742  [    0.832717] Loading compiled-in X.509 certificates
00:17:19.742  [    0.843341] Loaded X.509 cert 'Fedora kernel signing key: 750cd07b9b0705f4e5ae31986278e30533229237'
00:17:19.742  [    0.847313] page_owner is disabled
00:17:19.742  [    0.847845] Key type .fscrypt registered
00:17:19.742  [    0.848394] Key type fscrypt-provisioning registered
00:17:19.742  [    0.849478] Btrfs loaded, zoned=yes, fsverity=yes
00:17:19.742  [    0.850134] Key type big_key registered
00:17:19.742  [    0.854241] Key type encrypted registered
00:17:19.742  [    0.854812] ima: No TPM chip found, activating TPM-bypass!
00:17:19.742  [    0.855570] Loading compiled-in module X.509 certificates
00:17:19.742  [    0.856731] Loaded X.509 cert 'Fedora kernel signing key: 750cd07b9b0705f4e5ae31986278e30533229237'
00:17:19.742  [    0.857952] ima: Allocated hash algorithm: sha256
00:17:19.742  [    0.858612] ima: No architecture policies found
00:17:19.742  [    0.859259] evm: Initialising EVM extended attributes:
00:17:19.742  [    0.859946] evm: security.selinux
00:17:19.742  [    0.860401] evm: security.SMACK64 (disabled)
00:17:19.742  [    0.860969] evm: security.SMACK64EXEC (disabled)
00:17:19.742  [    0.861604] evm: security.SMACK64TRANSMUTE (disabled)
00:17:19.742  [    0.862292] evm: security.SMACK64MMAP (disabled)
00:17:19.742  [    0.862917] evm: security.apparmor (disabled)
00:17:19.742  [    0.863513] evm: security.ima
00:17:19.742  [    0.863919] evm: security.capability
00:17:19.742  [    0.864413] evm: HMAC attrs: 0x1
00:17:19.742  [    0.919235] alg: No test for 842 (842-scomp)
00:17:19.742  [    0.920095] alg: No test for 842 (842-generic)
00:17:19.742  [    0.927300] ata2: found unknown device (class 0)
00:17:19.742  [    0.928619] ata2.00: ATA-7: QEMU HARDDISK, 2.5+, max UDMA/100
00:17:19.742  [    0.929451] ata2.00: 10485760 sectors, multi 16: LBA48 
00:17:19.742  [    0.930758] scsi 1:0:0:0: Direct-Access     ATA      QEMU HARDDISK    2.5+ PQ: 0 ANSI: 5
00:17:19.742  [    0.932296] sd 1:0:0:0: [sda] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
00:17:19.742  [    0.933511] sd 1:0:0:0: [sda] Write Protect is off
00:17:19.742  [    0.934290] sd 1:0:0:0: Attached scsi generic sg0 type 0
00:17:19.742  [    0.935165] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
00:17:19.742  [    0.936613] sd 1:0:0:0: [sda] Preferred minimum I/O size 512 bytes
00:17:19.742  [    0.940423]  sda: sda1 sda2 sda3 sda4 sda5
00:17:19.742  [    0.941213] sd 1:0:0:0: [sda] Attached SCSI disk
00:17:19.742  [    1.020093] PM:   Magic number: 0:545:677
00:17:19.742  [    1.027273] RAS: Correctable Errors collector initialized.
00:17:19.742  [    1.028473] clk: Disabling unused clocks
00:17:19.743  [    1.031253] Freeing unused decrypted memory: 2036K
00:17:19.743  [    1.033266] Freeing unused kernel image (initmem) memory: 4516K
00:17:19.743  [    1.034175] Write protecting the kernel read-only data: 34816k
00:17:19.743  [    1.035401] Freeing unused kernel image (rodata/data gap) memory: 1912K
00:17:19.743  [    1.079666] x86/mm: Checked W+X mappings: passed, no W+X pages found.
00:17:19.743  [    1.080325] x86/mm: Checking user space page tables
00:17:19.743  [    1.122888] x86/mm: Checked W+X mappings: passed, no W+X pages found.
00:17:19.743  [    1.123548] Run /init as init process
00:17:19.743  [    1.140929] systemd[1]: systemd 253.12-1.fc38 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
00:17:19.743  [    1.145579] systemd[1]: Detected virtualization kvm.
00:17:19.743  [    1.146395] systemd[1]: Detected architecture x86-64.
00:17:19.743  [    1.147113] systemd[1]: Running in initrd.
00:17:19.743  
00:17:19.743  Welcome to Fedora Linux 38 (Cloud Edition) dracut-059-4.fc38 (Initramfs)!
00:17:19.743  
00:17:19.743  [  !!  ] This OS version (Fedora Linux 38 (Cloud Edition) dracut-059-4.fc38 (Initramfs)) is past its end-of-support date (2024-05-14)
00:17:19.743  [    1.151407] systemd[1]: No hostname configured, using default hostname.
00:17:19.743  [    1.152419] systemd[1]: Hostname set to <localhost>.
00:17:19.743  [    1.153204] systemd[1]: Initializing machine ID from random generator.
00:17:19.743  [    1.223486] systemd[1]: Queued start job for default target initrd.target.
00:17:19.743  [    1.232534] systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
00:17:19.743  [  OK  ] Started systemd-ask-passwo…quests to Console Directory Watch.
00:17:19.743  [    1.234977] systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
00:17:19.743  [  OK  ] Reached target cryptsetup.…get - Local Encrypted Volumes.
00:17:19.743  [    1.237386] systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
00:17:19.743  [  OK  ] Reached target initrd-usr-…get - Initrd /usr File System.
00:17:19.743  [    1.239776] systemd[1]: Reached target local-fs.target - Local File Systems.
00:17:19.743  [  OK  ] Reached target local-fs.target - Local File Systems.
00:17:19.743  [    1.241984] systemd[1]: Reached target paths.target - Path Units.
00:17:19.743  [  OK  ] Reached target paths.target - Path Units.
00:17:19.743  [    1.243950] systemd[1]: Reached target slices.target - Slice Units.
00:17:19.743  [  OK  ] Reached target slices.target - Slice Units.
00:17:19.743  [    1.245428] systemd[1]: Reached target swap.target - Swaps.
00:17:19.743  [  OK  ] Reached target swap.target - Swaps.
00:17:19.743  [    1.247667] systemd[1]: Reached target timers.target - Timer Units.
00:17:19.743  [  OK  ] Reached target timers.target - Timer Units.
00:17:19.743  [    1.249363] systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
00:17:19.743  [  OK  ] Listening on systemd-journ…t - Journal Socket (/dev/log).
00:17:19.743  [    1.251358] systemd[1]: Listening on systemd-journald.socket - Journal Socket.
00:17:19.743  [  OK  ] Listening on systemd-journald.socket - Journal Socket.
00:17:19.743  [    1.253146] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
00:17:19.743  [  OK  ] Listening on systemd-udevd….socket - udev Control Socket.
00:17:19.743  [    1.255032] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
00:17:19.743  [  OK  ] Listening on systemd-udevd…l.socket - udev Kernel Socket.
00:17:19.743  [    1.256852] systemd[1]: Reached target sockets.target - Socket Units.
00:17:19.743  [  OK  ] Reached target sockets.target - Socket Units.
00:17:19.743  [    1.264300] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
00:17:19.743           Starting kmod-static-nodes…ate List of Static Device Nodes...
00:17:19.743  [    1.266229] systemd[1]: memstrack.service - Memstrack Anylazing Service was skipped because no trigger condition checks were met.
00:17:19.743  [    1.269353] systemd[1]: Starting systemd-journald.service - Journal Service...
00:17:19.743           Starting systemd-journald.service - Journal Service...
00:17:19.743  [    1.273036] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
00:17:19.743           Starting systemd-modules-l…rvice - Load Kernel Modules...
00:17:19.743  [    1.275554] systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console...
00:17:19.743           Starting systemd-vconsole-…ice - Setup Virtual Console...
00:17:19.743  [    1.277849] systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
00:17:19.743  [  OK  ] Finished kmod-static-nodes…reate List of Static Device Nodes.
00:17:19.743  [    1.281621] systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
00:17:19.743           Starting systemd-tmpfiles-…ate Static Device Nodes in /dev...
00:17:19.743  [    1.287952] systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
00:17:19.743  [    1.288948] systemd-journald[226]: Collecting audit messages is disabled.
00:17:19.743  [  OK  ] Finished systemd-modules-l…service - Load Kernel Modules.
00:17:19.743  [    1.346467] systemd[1]: Started systemd-journald.service - Journal Service.
00:17:19.743  [  OK  ] Started systemd-journald.service - Journal Service.
00:17:19.743  [  OK  ] Finished systemd-tmpfiles-…reate Static Device Nodes in /dev.
00:17:19.743           Starting systemd-sysctl.se…ce - Apply Kernel Variables...
00:17:19.743           Starting systemd-tmpfiles-… Volatile Files and Directories...
00:17:19.743  [  OK  ] Finished systemd-sysctl.service - Apply Kernel Variables.
00:17:19.743  [  OK  ] Finished systemd-tmpfiles-…te Volatile Files and Directories.
00:17:19.743  [  OK  ] Finished systemd-vconsole-…rvice - Setup Virtual Console.
00:17:19.743           Starting dracut-cmdline-as…r additional cmdline parameters...
00:17:19.743  [  OK  ] Finished dracut-cmdline-as…for additional cmdline parameters.
00:17:19.743           Starting dracut-cmdline.service - dracut cmdline hook...
00:17:19.743  [  OK  ] Finished dracut-cmdline.service - dracut cmdline hook.
00:17:19.743           Starting dracut-pre-udev.s…vice - dracut pre-udev hook...
00:17:19.743  [  OK  ] Finished dracut-pre-udev.service - dracut pre-udev hook.
00:17:19.743           Starting systemd-udevd.ser…ger for Device Events and Files...
00:17:19.743  [  OK  ] Started systemd-udevd.serv…nager for Device Events and Files.
00:17:19.743           Starting dracut-pre-trigge…e - dracut pre-trigger hook...
00:17:19.743  [  OK  ] Finished dracut-pre-trigge…ice - dracut pre-trigger hook.
00:17:19.743           Starting systemd-udev-trig… - Coldplug All udev Devices...
00:17:19.743  [  OK  ] Finished systemd-udev-trig…e - Coldplug All udev Devices.
00:17:19.743  [  OK  ] Created slice system-modpr…lice - Slice /system/modprobe.
00:17:19.743           Starting dracut-initqueue.…ice - dracut initqueue hook...
00:17:19.743           Starting modprobe@configfs… - Load Kernel Module configfs...
00:17:19.743  [  OK  ] Finished modprobe@configfs… - Load Kernel Module configfs.
00:17:19.743  [    1.906891] virtio_blk virtio0: 2/0/0 default/read/poll queues
00:17:19.743  [    1.918936] BTRFS: device label fedora devid 1 transid 68 /dev/sda5 scanned by (udev-worker) (399)
00:17:19.743  [    1.928289] virtio_blk virtio0: [vda] 3747078144 512-byte logical blocks (1.92 TB/1.74 TiB)
00:17:19.743  [  OK  ] Found device dev-disk-by\x…device - QEMU_HARDDISK fedora.
00:17:19.743  [  OK  ] Reached target initrd-root…e.target - Initrd Root Device.
00:17:19.743  [  OK  ] Finished dracut-initqueue.…rvice - dracut initqueue hook.
00:17:19.743  [  OK  ] Reached target remote-fs-p…eparation for Remote File Systems.
00:17:19.743  [  OK  ] Reached target remote-cryp…et - Remote Encrypted Volumes.
00:17:19.743  [  OK  ] Reached target remote-fs.target - Remote File Systems.
00:17:19.743           Starting dracut-pre-mount.…ice - dracut pre-mount hook...
00:17:19.743  [  OK  ] Finished dracut-pre-mount.…rvice - dracut pre-mount hook.
00:17:19.743           Starting systemd-fsck-root…604-6023-4ba5-bb9e-80d612f84b0d...
00:17:19.743  [  OK  ] Finished systemd-fsck-root…0b604-6023-4ba5-bb9e-80d612f84b0d.
00:17:19.743           Mounting sys-kernel-config…ernel Configuration File System...
00:17:19.743           Mounting sysroot.mount - /sysroot...
00:17:19.743  [  OK  ] Mounted sys-kernel-config.… Kernel Configuration File System.
00:17:19.743  [  OK  ] Reached target sysinit.target - System Initialization.
00:17:19.743  [  OK  ] Reached target basic.target - Basic System.
00:17:19.743  [    2.306492] BTRFS info (device sda5): using crc32c (crc32c-intel) checksum algorithm
00:17:19.743  [    2.307607] BTRFS info (device sda5): using free space tree
00:17:19.743  [    2.313486] BTRFS info (device sda5): auto enabling async discard
00:17:19.743  [  OK  ] Mounted sysroot.mount - /sysroot.
00:17:19.743  [  OK  ] Reached target initrd-root…get - Initrd Root File System.
00:17:19.743           Starting initrd-parse-etc.…nts Configured in the Real Root...
00:17:19.743  [  OK  ] Finished initrd-parse-etc.…oints Configured in the Real Root.
00:17:19.743  [  OK  ] Reached target initrd-fs.target - Initrd File Systems.
00:17:19.743  [  OK  ] Reached target initrd.target - Initrd Default Target.
00:17:19.743           Starting dracut-mount.service - dracut mount hook...
00:17:19.743  [  OK  ] Finished dracut-mount.service - dracut mount hook.
00:17:19.743           Starting dracut-pre-pivot.…acut pre-pivot and cleanup hook...
00:17:19.743  [  OK  ] Finished dracut-pre-pivot.…dracut pre-pivot and cleanup hook.
00:17:19.743           Starting initrd-cleanup.se…ng Up and Shutting Down Daemons...
00:17:19.743  [  OK  ] Stopped target remote-cryp…et - Remote Encrypted Volumes.
00:17:19.743  [  OK  ] Stopped target timers.target - Timer Units.
00:17:19.743  [  OK  ] Stopped dracut-pre-pivot.s…dracut pre-pivot and cleanup hook.
00:17:19.743  [  OK  ] Stopped target initrd.target - Initrd Default Target.
00:17:19.744  [  OK  ] Stopped target basic.target - Basic System.
00:17:19.744  [  OK  ] Stopped target initrd-root…e.target - Initrd Root Device.
00:17:19.744  [  OK  ] Stopped target initrd-usr-…get - Initrd /usr File System.
00:17:19.744  [  OK  ] Stopped target paths.target - Path Units.
00:17:19.744  [  OK  ] Stopped target remote-fs.target - Remote File Systems.
00:17:19.744  [  OK  ] Stopped target remote-fs-p…eparation for Remote File Systems.
00:17:19.744  [  OK  ] Stopped target slices.target - Slice Units.
00:17:19.744  [  OK  ] Stopped target sockets.target - Socket Units.
00:17:19.744  [  OK  ] Stopped target sysinit.target - System Initialization.
00:17:19.744  [  OK  ] Stopped target swap.target - Swaps.
00:17:19.744  [  OK  ] Stopped dracut-mount.service - dracut mount hook.
00:17:19.744  [  OK  ] Stopped dracut-pre-mount.service - dracut pre-mount hook.
00:17:19.744  [  OK  ] Stopped target cryptsetup.…get - Local Encrypted Volumes.
00:17:19.744  [  OK  ] Stopped systemd-ask-passwo…quests to Console Directory Watch.
00:17:19.744  [  OK  ] Stopped dracut-initqueue.service - dracut initqueue hook.
00:17:19.744  [  OK  ] Stopped systemd-sysctl.service - Apply Kernel Variables.
00:17:19.744  [  OK  ] Stopped systemd-modules-lo…service - Load Kernel Modules.
00:17:19.744  [  OK  ] Stopped systemd-tmpfiles-s…te Volatile Files and Directories.
00:17:19.744  [  OK  ] Stopped target local-fs.target - Local File Systems.
00:17:19.744  [  OK  ] Stopped systemd-udev-trigg…e - Coldplug All udev Devices.
00:17:19.744  [  OK  ] Stopped dracut-pre-trigger…ice - dracut pre-trigger hook.
00:17:19.744           Stopping systemd-udevd.ser…ger for Device Events and Files...
00:17:19.744  [  OK  ] Finished initrd-cleanup.se…ning Up and Shutting Down Daemons.
00:17:19.744  [  OK  ] Stopped systemd-udevd.serv…nager for Device Events and Files.
00:17:19.744  [  OK  ] Closed systemd-udevd-contr….socket - udev Control Socket.
00:17:19.744  [  OK  ] Closed systemd-udevd-kernel.socket - udev Kernel Socket.
00:17:19.744  [  OK  ] Stopped dracut-pre-udev.service - dracut pre-udev hook.
00:17:19.744  [  OK  ] Stopped dracut-cmdline.service - dracut cmdline hook.
00:17:19.744  [  OK  ] Stopped dracut-cmdline-ask…for additional cmdline parameters.
00:17:19.744           Starting initrd-udevadm-cl…ice - Cleanup udev Database...
00:17:19.744  [  OK  ] Stopped systemd-tmpfiles-s…reate Static Device Nodes in /dev.
00:17:19.744  [  OK  ] Stopped kmod-static-nodes.…reate List of Static Device Nodes.
00:17:19.744  [  OK  ] Stopped systemd-vconsole-s…rvice - Setup Virtual Console.
00:17:19.744  [  OK  ] Finished initrd-udevadm-cl…rvice - Cleanup udev Database.
00:17:19.744  [  OK  ] Reached target initrd-switch-root.target - Switch Root.
00:17:19.744           Starting initrd-switch-root.service - Switch Root...
00:17:19.744  [    2.490106] systemd[1]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
00:17:19.744  [    2.588378] systemd-journald[226]: Received SIGTERM from PID 1 (systemd).
00:17:19.744  [    2.674820] systemd[1]: systemd 253.12-1.fc38 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
00:17:19.744  [    2.678037] systemd[1]: Detected virtualization kvm.
00:17:19.744  [    2.678548] systemd[1]: Detected architecture x86-64.
00:17:19.744  
00:17:19.744  Welcome to Fedora Linux 38 (Cloud Edition)!
00:17:19.744  
00:17:19.744  [  !!  ] This OS version (Fedora Linux 38 (Cloud Edition)) is past its end-of-support date (2024-05-14)
00:17:19.744  [    2.683582] systemd[1]: Hostname set to <vhostfedora-cloud-23052>.
00:17:19.744  [    2.780304] systemd[1]: bpf-lsm: LSM BPF program attached
00:17:19.744  [    2.856919] zram: Added device: zram0
00:17:19.744  [    2.982782] systemd[1]: initrd-switch-root.service: Deactivated successfully.
00:17:19.744  [    2.992289] systemd[1]: Stopped initrd-switch-root.service - Switch Root.
00:17:19.744  [  OK  ] Stopped initrd-switch-root.service - Switch Root.
00:17:19.744  [    2.994846] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
00:17:19.744  [    2.996492] systemd[1]: Created slice system-getty.slice - Slice /system/getty.
00:17:19.744  [  OK  ] Created slice system-getty.slice - Slice /system/getty.
00:17:19.744  [    2.999282] systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
00:17:19.744  [  OK  ] Created slice system-seria… - Slice /system/serial-getty.
00:17:19.744  [    3.002120] systemd[1]: Created slice system-sshd\x2dkeygen.slice - Slice /system/sshd-keygen.
00:17:19.744  [  OK  ] Created slice system-sshd\…e - Slice /system/sshd-keygen.
00:17:19.744  [    3.004973] systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
00:17:19.744  [  OK  ] Created slice system-syste… - Slice /system/systemd-fsck.
00:17:19.744  [    3.007798] systemd[1]: Created slice system-systemd\x2dzram\x2dsetup.slice - Slice /system/systemd-zram-setup.
00:17:19.744  [  OK  ] Created slice system-syste… Slice /system/systemd-zram-setup.
00:17:19.744  [    3.010585] systemd[1]: Created slice user.slice - User and Session Slice.
00:17:19.744  [  OK  ] Created slice user.slice - User and Session Slice.
00:17:19.744  [    3.012262] systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
00:17:19.744  [  OK  ] Started systemd-ask-passwo…quests to Console Directory Watch.
00:17:19.744  [    3.014398] systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
00:17:19.744  [  OK  ] Started systemd-ask-passwo… Requests to Wall Directory Watch.
00:17:19.744  [    3.016578] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
00:17:19.744  [  OK  ] Set up automount proc-sys-…rmats File System Automount Point.
00:17:19.744  [    3.018868] systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
00:17:19.744  [  OK  ] Reached target cryptsetup.…get - Local Encrypted Volumes.
00:17:19.744  [    3.020648] systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
00:17:19.744  [  OK  ] Stopped target initrd-switch-root.target - Switch Root.
00:17:19.744  [    3.022356] systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
00:17:19.744  [  OK  ] Stopped target initrd-fs.target - Initrd File Systems.
00:17:19.744  [    3.024037] systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
00:17:19.744  [  OK  ] Stopped target initrd-root…get - Initrd Root File System.
00:17:19.744  [    3.025834] systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
00:17:19.744  [  OK  ] Reached target integrityse…Local Integrity Protected Volumes.
00:17:19.744  [    3.027758] systemd[1]: Reached target paths.target - Path Units.
00:17:19.744  [  OK  ] Reached target paths.target - Path Units.
00:17:19.744  [    3.029205] systemd[1]: Reached target remote-fs.target - Remote File Systems.
00:17:19.744  [  OK  ] Reached target remote-fs.target - Remote File Systems.
00:17:19.744  [    3.030881] systemd[1]: Reached target slices.target - Slice Units.
00:17:19.744  [  OK  ] Reached target slices.target - Slice Units.
00:17:19.744  [    3.032369] systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
00:17:19.744  [  OK  ] Reached target veritysetup… - Local Verity Protected Volumes.
00:17:19.744  [    3.034255] systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs.
00:17:19.744  [  OK  ] Listening on dm-event.sock… Device-mapper event daemon FIFOs.
00:17:19.744  [    3.036169] systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket.
00:17:19.744  [  OK  ] Listening on lvm2-lvmpolld…ket - LVM2 poll daemon socket.
00:17:19.744  [    3.038894] systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
00:17:19.744  [  OK  ] Listening on systemd-cored…et - Process Core Dump Socket.
00:17:19.744  [    3.040787] systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
00:17:19.744  [  OK  ] Listening on systemd-initc… initctl Compatibility Named Pipe.
00:17:19.744  [    3.042818] systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
00:17:19.744  [  OK  ] Listening on systemd-oomd.…Out-Of-Memory (OOM) Killer Socket.
00:17:19.744  [    3.045628] systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
00:17:19.744  [  OK  ] Listening on systemd-udevd….socket - udev Control Socket.
00:17:19.744  [    3.048386] systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
00:17:19.744  [  OK  ] Listening on systemd-udevd…l.socket - udev Kernel Socket.
00:17:19.744  [    3.050674] systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
00:17:19.745  [  OK  ] Listening on systemd-userd… - User Database Manager Socket.
00:17:19.745  [    3.059243] systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
00:17:19.745           Mounting dev-hugepages.mount - Huge Pages File System...
00:17:19.745  [    3.062966] systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
00:17:19.745           Mounting dev-mqueue.mount…POSIX Message Queue File System...
00:17:19.745  [    3.066576] systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
00:17:19.745           Mounting sys-kernel-debug.… - Kernel Debug File System...
00:17:19.745  [    3.070030] systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
00:17:19.745           Mounting sys-kernel-tracin… - Kernel Trace File System...
00:17:19.745  [    3.073038] systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
00:17:19.745           Starting kmod-static-nodes…ate List of Static Device Nodes...
00:17:19.745  [    3.076806] systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
00:17:19.745           Starting lvm2-monitor.serv…ng dmeventd or progress polling...
00:17:19.745  [    3.081440] systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
00:17:19.745           Starting modprobe@configfs… - Load Kernel Module configfs...
00:17:19.745  [    3.085039] systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
00:17:19.745           Starting modprobe@dm_mod.s… - Load Kernel Module dm_mod...
00:17:19.745  [    3.090036] systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
00:17:19.745           Starting modprobe@drm.service - Load Kernel Module drm...
00:17:19.745  [    3.098396] systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
00:17:19.745           Starting modprobe@fuse.ser…e - Load Kernel Module fuse...
00:17:19.745  [    3.102321] systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
00:17:19.745           Starting modprobe@loop.ser…e - Load Kernel Module loop...
00:17:19.745  [    3.104630] systemd[1]: systemd-fsck-root.service: Deactivated successfully.
00:17:19.745  [    3.112064] loop: module loaded
00:17:19.745  [    3.112694] systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
00:17:19.745  [  OK  ] Stopped systemd-fsck-root.… File System Check on Root Device.
00:17:19.745  [    3.116316] systemd[1]: Stopped systemd-journald.service - Journal Service.
00:17:19.745  [  OK  ] Stopped systemd-journald.service - Journal Service.
00:17:19.745  [    3.119559] systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
00:17:19.745  [  OK  ] Listening on systemd-journ…socket - Journal Audit Socket.
00:17:19.745  [    3.126085] fuse: init (API version 7.38)
00:17:19.745  [    3.128357] systemd[1]: Starting systemd-journald.service - Journal Service...
00:17:19.745           Starting systemd-journald.service - Journal Service...
00:17:19.745  [    3.132264] systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
00:17:19.745           Starting systemd-modules-l…rvice - Load Kernel Modules...
00:17:19.745  [    3.135308] systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
00:17:19.745           Starting systemd-network-g… units from Kernel command line...
00:17:19.745  [    3.137561] systemd[1]: systemd-pcrmachine.service - TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
00:17:19.745  [    3.141478] systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
00:17:19.745           Starting systemd-remount-f…nt Root and Kernel File Systems...
00:17:19.745  [    3.144329] systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
00:17:19.745           Starting systemd-udev-trig… - Coldplug All udev Devices...
00:17:19.745  [    3.150716] systemd-journald[522]: Collecting audit messages is enabled.
00:17:19.745  [    3.153317] systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
00:17:19.745  [    3.154152] BTRFS info (device sda5: state M): use zstd compression, level 1
00:17:19.745  [  OK  ] Mounted dev-hugepages.mount - Huge Pages File System.
00:17:19.745  [    3.156440] systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
00:17:19.745  [  OK  ] Mounted dev-mqueue.mount… - POSIX Message Queue File System.
00:17:19.745  [    3.158935] systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
00:17:19.745  [  OK  ] Mounted sys-kernel-debug.m…nt - Kernel Debug File System.
00:17:19.745  [    3.162301] systemd[1]: Started systemd-journald.service - Journal Service.
00:17:19.745  [  OK  ] Started systemd-journald.service - Journal Service.
00:17:19.745  [    3.166268] audit: type=1130 audit(1732009211.879:2): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Mounted sys-kernel-tracing…nt - Kernel Trace File System.
00:17:19.745  [  OK  ] Finished kmod-static-nodes…reate List of Static Device Nodes.
00:17:19.745  [    3.176410] audit: type=1130 audit(1732009211.889:3): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Finished lvm2-monitor.serv…sing dmeventd or progress polling.
00:17:19.745  [    3.182974] audit: type=1130 audit(1732009211.895:4): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Finished modprobe@configfs… - Load Kernel Module configfs.
00:17:19.745  [    3.189244] audit: type=1130 audit(1732009211.901:5): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [    3.191858] audit: type=1131 audit(1732009211.901:6): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Finished modprobe@dm_mod.s…e - Load Kernel Module dm_mod.
00:17:19.745  [    3.196948] audit: type=1130 audit(1732009211.909:7): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [    3.199106] audit: type=1131 audit(1732009211.909:8): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Finished modprobe@drm.service - Load Kernel Module drm.
00:17:19.745  [    3.202853] audit: type=1130 audit(1732009211.915:9): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [    3.204748] audit: type=1131 audit(1732009211.915:10): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
00:17:19.745  [  OK  ] Finished modprobe@fuse.service - Load Kernel Module fuse.
00:17:19.745  [  OK  ] Finished modprobe@loop.service - Load Kernel Module loop.
00:17:19.745  [  OK  ] Finished systemd-modules-l…service - Load Kernel Modules.
00:17:19.745  [  OK  ] Finished systemd-network-g…rk units from Kernel command line.
00:17:19.745  [  OK  ] Finished systemd-remount-f…ount Root and Kernel File Systems.
00:17:19.745  [  OK  ] Finished systemd-udev-trig…e - Coldplug All udev Devices.
00:17:19.745  [  OK  ] Reached target network-pre…get - Preparation for Network.
00:17:19.745           Mounting sys-fs-fuse-conne… - FUSE Control File System...
00:17:19.745           Starting systemd-journal-f…h Journal to Persistent Storage...
00:17:19.745           Starting systemd-random-se… - Load/Save OS Random Seed...
00:17:19.745  [    3.257119] systemd-journald[522]: Received client request to flush runtime journal.
00:17:19.745           Starting systemd-sysctl.se…ce - Apply Kernel Variables...
00:17:19.745           Starting systemd-tmpfiles-…ate Static Device Nodes in /dev...
00:17:19.745  [  OK  ] Mounted sys-fs-fuse-connec…nt - FUSE Control File System.
00:17:19.745  [    3.301514] systemd-journald[522]: /var/log/journal/366559fdb5094326b8eb2443d98146f4/system.journal: Monotonic clock jumped backwards relative to last journal entry, rotating.
00:17:19.745  [    3.303066] systemd-journald[522]: Rotating system journal.
00:17:19.745  [  OK  ] Finished systemd-random-se…ce - Load/Save OS Random Seed.
00:17:19.745  [  OK  ] Finished systemd-sysctl.service - Apply Kernel Variables.
00:17:19.745  [  OK  ] Finished systemd-tmpfiles-…reate Static Device Nodes in /dev.
00:17:19.745  [  OK  ] Reached target local-fs-pr…reparation for Local File Systems.
00:17:19.745           Starting systemd-udevd.ser…ger for Device Events and Files...
00:17:19.745  [  OK  ] Started systemd-udevd.serv…nager for Device Events and Files.
00:17:19.745           Starting modprobe@configfs…m - Load Kernel Module configfs...
00:17:19.745  [  OK  ] Finished modprobe@configfs… - Load Kernel Module configfs.
00:17:19.745  [  OK  ] Finished systemd-journal-f…ush Journal to Persistent Storage.
00:17:19.745  [  OK  ] Found device dev-zram0.device - /dev/zram0.
00:17:19.745           Starting systemd-zram-setu… - Create swap on /dev/zram0...
00:17:19.745  [    3.449405] zram0: detected capacity change from 0 to 1937408
00:17:19.745  [  OK  ] Finished systemd-zram-setu…e - Create swap on /dev/zram0.
00:17:19.745           Activating swap dev-zram0.…- Compressed Swap on /dev/zram0...
00:17:19.745  [    3.493200] Adding 968700k swap on /dev/zram0.  Priority:100 extents:1 across:968700k SSDscFS
00:17:19.745  [  OK  ] Activated swap dev-zram0.s…m - Compressed Swap on /dev/zram0.
00:17:19.745  [  OK  ] Reached target swap.target - Swaps.
00:17:19.745           Mounting tmp.mount - Temporary Directory /tmp...
00:17:19.745  [  OK  ] Mounted tmp.mount - Temporary Directory /tmp.
00:17:19.745  [    3.570715] parport_pc 00:03: reported by Plug and Play ACPI
00:17:19.745  [    3.572085] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
00:17:19.745  [    3.578020] bochs-drm 0000:00:02.0: vgaarb: deactivate vga console
00:17:19.745  [    3.579812] e1000: Intel(R) PRO/1000 Network Driver
00:17:19.745  [    3.579814] e1000: Copyright (c) 1999-2006 Intel Corporation.
00:17:19.745  [    3.590080] Console: switching to colour dummy device 80x25
00:17:19.745  [    3.614263] piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
00:17:19.745  [    3.614348] [drm] Found bochs VGA, ID 0xb0c5.
00:17:19.745  [    3.615483] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebf0000.
00:17:19.745  [    3.619166] [drm] Found EDID data blob.
00:17:19.746  [    3.620062] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 0
00:17:19.746  [    3.622605] fbcon: bochs-drmdrmfb (fb0) is primary device
00:17:19.746  [    3.624390] ACPI: \_SB_.LNKC: Enabled at IRQ 10
00:17:19.746  [    3.626423] Console: switching to colour frame buffer device 160x50
00:17:19.746  [    3.630013] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
00:17:19.746  [    3.641144] bochs-drm 0000:00:02.0: [drm] fb0: bochs-drmdrmfb frame buffer device
00:17:19.746  [    3.690544] ppdev: user-space parallel port driver
00:17:19.746           Starting systemd-fsck@dev-… on /dev/disk/by-uuid/6C81-19BE...
00:17:19.746  [    3.961101] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 52:54:00:12:34:56
00:17:19.746  [    3.961789] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
00:17:19.746  [  OK  ] Finished systemd-fsck@dev-…ck on /dev/disk/by-uuid/6C81-19BE.
00:17:19.746           Starting systemd-fsck@dev-…f56-74ee-4fb3-9748-b79bb5f6c1bc...
00:17:19.746  [  OK  ] Found device dev-disk-by\x…device - QEMU_HARDDISK fedora.
00:17:19.746  [  OK  ] Finished systemd-fsck@dev-…57f56-74ee-4fb3-9748-b79bb5f6c1bc.
00:17:19.746  WARN: SEABIOS LOG:
00:17:19.746  SeaBIOS (version rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org)
00:17:19.746  BUILD: gcc: (GCC) 12.2.1 20220819 (Red Hat Cross 12.2.1-2) binutils: version 2.38-5.fc37
00:17:19.746  No Xen hypervisor found.
00:17:19.746  Running on QEMU (i440fx)
00:17:19.746  Running on KVM
00:17:19.746  Found QEMU fw_cfg
00:17:19.746  QEMU fw_cfg DMA interface supported
00:17:19.746  qemu/e820: addr 0x0000000000000000 len 0x0000000040000000 [RAM]
00:17:19.746  Relocating init from 0x000d4980 to 0x3efeb360 (size 84992)
00:17:19.746  Moving pm_base to 0x600
00:17:19.746  boot order:
00:17:19.746  1: /pci@i0cf8/ide@1,1/drive@1/disk@0
00:17:19.746  kvmclock: at 0xe8ca0 (msr 0x4b564d01)
00:17:19.746  kvmclock: stable tsc, 2294 MHz
00:17:19.746  CPU Mhz=2294 (kvmclock)
00:17:19.746  === PCI bus & bridge init ===
00:17:19.746  PCI: pci_bios_init_bus_rec bus = 0x0
00:17:19.746  === PCI device probing ===
00:17:19.746  Found 7 PCI devices (max PCI bus is 00)
00:17:19.746  === PCI new allocation pass #1 ===
00:17:19.746  PCI: check devices
00:17:19.746  === PCI new allocation pass #2 ===
00:17:19.746  PCI: IO: c000 - c0cf
00:17:19.746  PCI: 32: 0000000080000000 - 00000000fec00000
00:17:19.746  PCI: map device bdf=00:04.0  bar 0, addr 0000c000, size 00000080 [io]
00:17:19.746  PCI: map device bdf=00:03.0  bar 1, addr 0000c080, size 00000040 [io]
00:17:19.746  PCI: map device bdf=00:01.1  bar 4, addr 0000c0c0, size 00000010 [io]
00:17:19.746  PCI: map device bdf=00:03.0  bar 6, addr feb80000, size 00040000 [mem]
00:17:19.746  PCI: map device bdf=00:03.0  bar 0, addr febc0000, size 00020000 [mem]
00:17:19.746  PCI: map device bdf=00:02.0  bar 6, addr febe0000, size 00010000 [mem]
00:17:19.746  PCI: map device bdf=00:02.0  bar 2, addr febf0000, size 00001000 [mem]
00:17:19.746  PCI: map device bdf=00:04.0  bar 1, addr febf1000, size 00001000 [mem]
00:17:19.746  PCI: map device bdf=00:02.0  bar 0, addr fd000000, size 01000000 [prefmem]
00:17:19.746  PCI: map device bdf=00:04.0  bar 4, addr fe000000, size 00004000 [prefmem]
00:17:19.746  PCI: init bdf=00:00.0 id=8086:1237
00:17:19.746  PCI: init bdf=00:01.0 id=8086:7000
00:17:19.746  PIIX3/PIIX4 init: elcr=00 0c
00:17:19.746  PCI: init bdf=00:01.1 id=8086:7010
00:17:19.746  PCI: init bdf=00:01.3 id=8086:7113
00:17:19.746  PCI: init bdf=00:02.0 id=1234:1111
00:17:19.746  PCI: init bdf=00:03.0 id=8086:100e
00:17:19.746  PCI: init bdf=00:04.0 id=1af4:1001
00:17:19.746  PCI: Using 00:02.0 for primary VGA
00:17:19.746  handle_smp: apic_id=0x1
00:17:19.746  Found 2 cpu(s) max supported 2 cpu(s)
00:17:19.746  Copying PIR from 0x3efffc40 to 0x000f5c80
00:17:19.746  Copying MPTABLE from 0x00006cfc/3efe2110 to 0x000f5b90
00:17:19.746  Copying SMBIOS from 0x00006cfc to 0x000f59d0
00:17:19.746  table(50434146)=0x3ffe1a03 (via rsdt)
00:17:19.746  ACPI: parse DSDT at 0x3ffe0040 (len 6595)
00:17:19.746  Scan for VGA option rom
00:17:19.746  Running option rom at c000:0003
00:17:19.746  Start SeaVGABIOS (version rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org)
00:17:19.746  VGABUILD: gcc: (GCC) 12.2.1 20220819 (Red Hat Cross 12.2.1-2) binutils: version 2.38-5.fc37
00:17:19.746  enter vga_post:
00:17:19.746     a=00000010  b=0000ffff  c=00000000  d=0000ffff ds=0000 es=f000 ss=0000
00:17:19.746    si=00000000 di=00006060 bp=00000000 sp=00006d1a cs=f000 ip=d015  f=0000
00:17:19.746  VBE DISPI: bdf 00:02.0, bar 0
00:17:19.746  VBE DISPI: lfb_addr=fd000000, size 16 MB
00:17:19.746  Removing mode 19a
00:17:19.746  Removing mode 19b
00:17:19.746  Removing mode 19c
00:17:19.746  Removing mode 19d
00:17:19.746  Removing mode 19e
00:17:19.746  Attempting to allocate 512 bytes lowmem via pmm call to f000:d0b1
00:17:19.746  pmm call arg1=0
00:17:19.746  VGA stack allocated at e8aa0
00:17:19.746  Turning on vga text mode console
00:17:19.746  set VGA mode 3
00:17:19.746  SeaBIOS (version rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org)
00:17:19.746  Searching bootorder for: /pci@i0cf8/isa@1/fdc@03f0/floppy@0
00:17:19.746  ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
00:17:19.746  ATA controller 2 at 170/374/0 (irq 15 dev 9)
00:17:19.746  Searching bootorder for: HALT
00:17:19.746  found virtio-blk at 00:04.0
00:17:19.746  pci dev 00:04.0 virtio cap at 0x84 type 5 [pci cfg access]
00:17:19.746  pci dev 00:04.0 virtio cap at 0x70 type 2 bar 4 at 0xfe000000 off +0x3000 [mmio]
00:17:19.746  pci dev 00:04.0 virtio cap at 0x60 type 4 bar 4 at 0xfe000000 off +0x2000 [mmio]
00:17:19.746  pci dev 00:04.0 virtio cap at 0x50 type 3 bar 4 at 0xfe000000 off +0x1000 [mmio]
00:17:19.746  pci dev 00:04.0 virtio cap at 0x40 type 1 bar 4 at 0xfe000000 off +0x0000 [mmio]
00:17:19.746  pci dev 00:04.0 using modern (1.0) virtio mode
00:17:19.746  Searching bootorder for: /pci@i0cf8/*@4
00:17:19.746  Searching bios-geometry for: /pci@i0cf8/*@4
00:17:19.746  ata1-0: QEMU HARDDISK ATA-7 Hard-Disk (5120 MiBytes)
00:17:19.746  Searching bootorder for: /pci@i0cf8/*@1,1/drive@1/disk@0
00:17:19.746  Searching bios-geometry for: /pci@i0cf8/*@1,1/drive@1/disk@0
00:17:19.746  Found 1 lpt ports
00:17:19.746  Found 1 serial ports
00:17:19.746  PS2 keyboard initialized
00:17:19.746  All threads complete.
00:17:19.746  Scan for option roms
00:17:19.746  Running option rom at ca00:0003
00:17:19.746  pmm call arg1=1
00:17:19.746  pmm call arg1=0
00:17:19.746  pmm call arg1=1
00:17:19.746  pmm call arg1=0
00:17:19.746  Searching bootorder for: /pci@i0cf8/*@3
00:17:19.746  Searching bootorder for: /rom@genroms/kvmvapic.bin
00:17:19.746  Searching bootorder for: HALT
00:17:19.746  drive 0x000f5900: PCHS=10402/16/63 translation=lba LCHS=652/255/63 s=10485760
00:17:19.746  drive 0x000e8a30: PCHS=0/0/0 translation=lba LCHS=1024/255/63 s=3747078144
00:17:19.746  Running option rom at cb00:0003
00:17:19.746  Space available for UMB: cd800-e8800, f5500-f5900
00:17:19.746  Returned 16633856 bytes of ZoneHigh
00:17:19.746  e820 map has 7 items:
00:17:19.746    0: 0000000000000000 - 000000000009fc00 = 1 RAM
00:17:19.746    1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
00:17:19.746    2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
00:17:19.746    3: 0000000000100000 - 000000003ffdd000 = 1 RAM
00:17:19.746    4: 000000003ffdd000 - 0000000040000000 = 2 RESERVED
00:17:19.746    5: 00000000feffc000 - 00000000ff000000 = 2 RESERVED
00:17:19.746    6: 00000000fffc0000 - 0000000100000000 = 2 RESERVED
00:17:19.746  enter handle_19:
00:17:19.746    NULL
00:17:19.746  Booting from Hard Disk...
00:17:19.746  Booting from 0000:7c00
00:17:19.746  VBE mode info request: 100
00:17:19.746  VBE mode info request: 101
00:17:19.746  VBE mode info request: 102
00:17:19.746  VBE mode info request: 103
00:17:19.746  VBE mode info request: 104
00:17:19.746  VBE mode info request: 105
00:17:19.746  VBE mode info request: 106
00:17:19.746  VBE mode info request: 107
00:17:19.746  VBE mode info request: 10d
00:17:19.746  VBE mode info request: 10e
00:17:19.746  VBE mode info request: 10f
00:17:19.746  VBE mode info request: 110
00:17:19.746  VBE mode info request: 111
00:17:19.746  VBE mode info request: 112
00:17:19.746  VBE mode info request: 113
00:17:19.746  VBE mode info request: 114
00:17:19.746  VBE mode info request: 115
00:17:19.746  VBE mode info request: 116
00:17:19.746  VBE mode info request: 117
00:17:19.746  VBE mode info request: 118
00:17:19.746  VBE mode info request: 119
00:17:19.746  VBE mode info request: 11a
00:17:19.746  VBE mode info request: 11b
00:17:19.746  VBE mode info request: 11c
00:17:19.746  VBE mode info request: 11d
00:17:19.746  VBE mode info request: 11e
00:17:19.746  VBE mode info request: 11f
00:17:19.746  VBE mode info request: 140
00:17:19.746  VBE mode info request: 141
00:17:19.746  VBE mode info request: 142
00:17:19.746  VBE mode info request: 143
00:17:19.746  VBE mode info request: 144
00:17:19.746  VBE mode info request: 145
00:17:19.746  VBE mode info request: 146
00:17:19.746  VBE mode info request: 147
00:17:19.746  VBE mode info request: 148
00:17:19.746  VBE mode info request: 149
00:17:19.746  VBE mode info request: 14a
00:17:19.746  VBE mode info request: 14b
00:17:19.746  VBE mode info request: 14c
00:17:19.746  VBE mode info request: 175
00:17:19.746  VBE mode info request: 176
00:17:19.746  VBE mode info request: 177
00:17:19.746  VBE mode info request: 178
00:17:19.746  VBE mode info request: 179
00:17:19.746  VBE mode info request: 17a
00:17:19.746  VBE mode info request: 17b
00:17:19.746  VBE mode info request: 17c
00:17:19.746  VBE mode info request: 17d
00:17:19.746  VBE mode info request: 17e
00:17:19.746  VBE mode info request: 17f
00:17:19.746  VBE mode info request: 180
00:17:19.746  VBE mode info request: 181
00:17:19.746  VBE mode info request: 182
00:17:19.746  VBE mode info request: 183
00:17:19.746  VBE mode info request: 184
00:17:19.746  VBE mode info request: 185
00:17:19.746  VBE mode info request: 186
00:17:19.746  VBE mode info request: 187
00:17:19.746  VBE mode info request: 188
00:17:19.746  VBE mode info request: 189
00:17:19.746  VBE mode info request: 18a
00:17:19.746  VBE mode info request: 18b
00:17:19.746  VBE mode info request: 18c
00:17:19.746  VBE mode info request: 18d
00:17:19.746  VBE mode info request: 18e
00:17:19.746  VBE mode info request: 18f
00:17:19.746  VBE mode info request: 190
00:17:19.747  VBE mode info request: 191
00:17:19.747  VBE mode info request: 192
00:17:19.747  VBE mode info request: 193
00:17:19.747  VBE mode info request: 194
00:17:19.747  VBE mode info request: 195
00:17:19.747  VBE mode info request: 196
00:17:19.747  VBE mode info request: 197
00:17:19.747  VBE mode info request: 198
00:17:19.747  VBE mode info request: 199
00:17:19.747  VBE mode info request: 0
00:17:19.747  VBE mode info request: 1
00:17:19.747  VBE mode info request: 2
00:17:19.747  VBE mode info request: 3
00:17:19.747  VBE mode info request: 4
00:17:19.747  VBE mode info request: 5
00:17:19.747  VBE mode info request: 6
00:17:19.747  VBE mode info request: 7
00:17:19.747  VBE mode info request: d
00:17:19.747  VBE mode info request: e
00:17:19.747  VBE mode info request: f
00:17:19.747  VBE mode info request: 10
00:17:19.747  VBE mode info request: 11
00:17:19.747  VBE mode info request: 12
00:17:19.747  VBE mode info request: 13
00:17:19.747  VBE mode info request: 6a
00:17:19.747  set VGA mode 3
00:17:19.747  WARN: ================
00:17:19.747   10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@951 -- # return 1
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@154 -- # clean_lvol_cfg
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@45 -- # notice 'Removing lvol bdevs'
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing lvol bdevs'
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing lvol bdevs'
00:17:19.747  INFO: Removing lvol bdevs
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@46 -- # for lvol_bdev in "${lvol_bdevs[@]}"
00:17:19.747    10:45:05 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@47 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock -t 120 bdev_lvol_delete 60507f04-c1ef-46ba-aad8-1a3324c47e26
00:17:19.747  [2024-11-19 10:45:05.957036] vhost_blk.c:1221:vhost_user_bdev_remove_cb: *WARNING*: naa.0.0: hot-removing bdev - all further requests will fail.
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@48 -- # notice 'lvol bdev 60507f04-c1ef-46ba-aad8-1a3324c47e26 removed'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'lvol bdev 60507f04-c1ef-46ba-aad8-1a3324c47e26 removed'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: lvol bdev 60507f04-c1ef-46ba-aad8-1a3324c47e26 removed'
00:17:19.747  INFO: lvol bdev 60507f04-c1ef-46ba-aad8-1a3324c47e26 removed
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@51 -- # notice 'Removing lvol stores'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Removing lvol stores'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Removing lvol stores'
00:17:19.747  INFO: Removing lvol stores
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@52 -- # /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock -t 120 bdev_lvol_delete_lvstore -u b230a30c-0e18-4157-b337-40b03d91a0e1
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@53 -- # notice 'lvol store b230a30c-0e18-4157-b337-40b03d91a0e1 removed'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'lvol store b230a30c-0e18-4157-b337-40b03d91a0e1 removed'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: lvol store b230a30c-0e18-4157-b337-40b03d91a0e1 removed'
00:17:19.747  INFO: lvol store b230a30c-0e18-4157-b337-40b03d91a0e1 removed
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- lvol/lvol_test.sh@154 -- # error_exit '' 154
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1279 -- # trap - ERR
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1280 -- # print_backtrace
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1159 -- # args=('154' '' '--ctrl-type=spdk_vhost_blk' '--fio-bin=/usr/src/fio-static/fio' '-x')
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1159 -- # local args
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1161 -- # xtrace_disable
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:17:19.747  ========== Backtrace start: ==========
00:17:19.747  
00:17:19.747  in /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh:154 -> error_exit([""],["154"])
00:17:19.747       ...
00:17:19.747     149 		local vhost_log_file="$vhost_dir/vhost.log"
00:17:19.747     150 		local vhost_pid_file="$vhost_dir/vhost.pid"
00:17:19.747     151 		local vhost_socket="$vhost_dir/usvhost"
00:17:19.747     152 		notice "starting vhost app in background"
00:17:19.747     153 		[[ -r "$vhost_pid_file" ]] && vhost_kill $vhost_name
00:17:19.747  => 154 		[[ -d $vhost_dir ]] && rm -f $vhost_dir/*
00:17:19.747     155 		mkdir -p $vhost_dir
00:17:19.747     156 	
00:17:19.747     157 		if [[ ! -x $vhost_app ]]; then
00:17:19.747     158 			error "application not found: $vhost_app"
00:17:19.747     159 			return 1
00:17:19.747       ...
00:17:19.747  in /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh:154 -> main(["-x"],["--fio-bin=/usr/src/fio-static/fio"],["--ctrl-type=spdk_vhost_blk"])
00:17:19.747       ...
00:17:19.747     149 	
00:17:19.747     150 	$rpc_py vhost_get_controllers
00:17:19.747     151 	
00:17:19.747     152 	# Run VMs
00:17:19.747     153 	vm_run $used_vms
00:17:19.747  => 154 	vm_wait_for_boot 300 $used_vms
00:17:19.747     155 	
00:17:19.747     156 	# Get disk names from VMs and run FIO traffic
00:17:19.747     157 	
00:17:19.747     158 	fio_disks=""
00:17:19.747     159 	for vm_num in $used_vms; do
00:17:19.747       ...
00:17:19.747  
00:17:19.747  ========== Backtrace end ==========
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1198 -- # return 0
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1281 -- # set +e
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1282 -- # error 'Error on  154'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@82 -- # echo ===========
00:17:19.747  ===========
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@83 -- # message ERROR 'Error on  154'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=ERROR
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'ERROR: Error on  154'
00:17:19.747  ERROR: Error on  154
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@84 -- # echo ===========
00:17:19.747  ===========
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@86 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1284 -- # at_app_exit
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1263 -- # local vhost_name
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1265 -- # notice 'APP EXITING'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'APP EXITING'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: APP EXITING'
00:17:19.747  INFO: APP EXITING
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1266 -- # notice 'killing all VMs'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'killing all VMs'
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: killing all VMs'
00:17:19.747  INFO: killing all VMs
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1267 -- # vm_kill_all
00:17:19.747    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@476 -- # local vm
00:17:19.747     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@477 -- # vm_list_all
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@466 -- # vms=()
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@466 -- # local vms
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@478 -- # vm_kill 0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@309 -- # return 0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@449 -- # local vm_pid
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@450 -- # vm_pid=1883607
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=1883607)'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=1883607)'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1883607)'
00:17:19.748  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=1883607)
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@454 -- # /bin/kill 1883607
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_STATUS
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) new device status(0x00000000):
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-RESET: 1
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-ACKNOWLEDGE: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FEATURES_OK: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DRIVER_OK: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-DEVICE_NEED_RESET: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) 	-FAILED: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 0
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_SET_VRING_ENABLE
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) set queue enable: 0 to qp idx: 1
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@455 -- # notice 'process 1883607 killed'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'process 1883607 killed'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:0 file:49
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) read message VHOST_USER_GET_VRING_BASE
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vring base idx:1 file:51
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: process 1883607 killed'
00:17:19.748  INFO: process 1883607 killed
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:17:19.748  VHOST_CONFIG: (/root/vhost_test/vhost/0/naa.0.0) vhost peer closed
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1269 -- # notice 'killing vhost app'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'killing vhost app'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost app'
00:17:19.748  INFO: killing vhost app
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1271 -- # for vhost_name in "$TARGET_DIR"/*
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1272 -- # basename /root/vhost_test/vhost/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1272 -- # vhost_kill 0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@202 -- # local rc=0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@203 -- # local vhost_name=0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@210 -- # local vhost_dir
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@211 -- # get_vhost_dir 0
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=0
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@220 -- # local vhost_pid
00:17:19.748     10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@221 -- # vhost_pid=1882490
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@222 -- # notice 'killing vhost (PID 1882490) app'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 1882490) app'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 1882490) app'
00:17:19.748  INFO: killing vhost (PID 1882490) app
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@224 -- # kill -INT 1882490
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:17:19.748  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i = 0 ))
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1882490
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:17:19.748  .
00:17:19.748    10:45:08 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:17:20.316    10:45:09 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:17:20.316    10:45:09 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:20.316    10:45:09 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1882490
00:17:20.316    10:45:09 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:17:20.316  .
00:17:20.316    10:45:09 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:17:21.250    10:45:10 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:17:21.250    10:45:10 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:21.250    10:45:10 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1882490
00:17:21.250    10:45:10 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@228 -- # echo .
00:17:21.250  .
00:17:21.250    10:45:10 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@229 -- # sleep 1
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i++ ))
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@226 -- # (( i < 60 ))
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@227 -- # kill -0 1882490
00:17:22.189  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (1882490) - No such process
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@231 -- # break
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@234 -- # kill -0 1882490
00:17:22.189  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (1882490) - No such process
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@239 -- # kill -0 1882490
00:17:22.189  /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (1882490) - No such process
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@245 -- # is_pid_child 1882490
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1668 -- # local pid=1882490 _pid
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1670 -- # read -r _pid
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1667 -- # jobs -pr
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1671 -- # (( pid == _pid ))
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1670 -- # read -r _pid
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1674 -- # return 1
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@261 -- # return 0
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1271 -- # for vhost_name in "$TARGET_DIR"/*
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1272 -- # basename /root/vhost_test/vhost/3
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1272 -- # vhost_kill 3
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@202 -- # local rc=0
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@203 -- # local vhost_name=3
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@205 -- # [[ -z 3 ]]
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@210 -- # local vhost_dir
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@211 -- # get_vhost_dir 3
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@105 -- # local vhost_name=3
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@107 -- # [[ -z 3 ]]
00:17:22.189     10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/3
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/3
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/3/vhost.pid
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/3/vhost.pid ]]
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=WARN
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:17:22.189  WARN: no vhost pid file found
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@216 -- # return 0
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1275 -- # notice 'EXIT DONE'
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@94 -- # message INFO 'EXIT DONE'
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@60 -- # local verbose_out
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@61 -- # false
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@62 -- # verbose_out=
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@70 -- # shift
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@71 -- # echo -e 'INFO: EXIT DONE'
00:17:22.189  INFO: EXIT DONE
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- vhost/common.sh@1285 -- # exit 1
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1129 -- # trap - ERR
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1129 -- # print_backtrace
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1159 -- # args=('--ctrl-type=spdk_vhost_blk' '--fio-bin=/usr/src/fio-static/fio' '-x' '/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh' 'vhost_blk_lvol_integrity' '--iso')
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1159 -- # local args
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1161 -- # xtrace_disable
00:17:22.189    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@10 -- # set +x
00:17:22.189  ========== Backtrace start: ==========
00:17:22.189  
00:17:22.189  in /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["vhost_blk_lvol_integrity"],["/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/lvol/lvol_test.sh"],["-x"],["--fio-bin=/usr/src/fio-static/fio"],["--ctrl-type=spdk_vhost_blk"])
00:17:22.189       ...
00:17:22.189     1124		timing_enter $test_name
00:17:22.189     1125		echo "************************************"
00:17:22.189     1126		echo "START TEST $test_name"
00:17:22.189     1127		echo "************************************"
00:17:22.189     1128		xtrace_restore
00:17:22.189     1129		time "$@"
00:17:22.189     1130		xtrace_disable
00:17:22.189     1131		echo "************************************"
00:17:22.189     1132		echo "END TEST $test_name"
00:17:22.189     1133		echo "************************************"
00:17:22.189     1134		timing_exit $test_name
00:17:22.189       ...
00:17:22.449  in /var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh:65 -> main(["--iso"])
00:17:22.449       ...
00:17:22.449     60  	echo 'Running lvol integrity suite...'
00:17:22.449     61  	run_test "vhost_scsi_lvol_integrity" $WORKDIR/lvol/lvol_test.sh -x --fio-bin=$FIO_BIN \
00:17:22.449     62  		--ctrl-type=spdk_vhost_scsi --thin-provisioning
00:17:22.449     63  	
00:17:22.449     64  	echo 'Running lvol integrity suite...'
00:17:22.449  => 65  	run_test "vhost_blk_lvol_integrity" $WORKDIR/lvol/lvol_test.sh -x --fio-bin=$FIO_BIN \
00:17:22.449     66  		--ctrl-type=spdk_vhost_blk
00:17:22.449     67  	
00:17:22.449     68  	echo 'Running blk packed ring integrity suite...'
00:17:22.449     69  	run_test "vhost_blk_packed_ring_integrity" $WORKDIR/fiotest/fio.sh -x --fio-bin=$FIO_BIN \
00:17:22.449     70  		--vm=0,$VM_IMAGE,Nvme0n1p0 \
00:17:22.449       ...
00:17:22.449  
00:17:22.449  ========== Backtrace end ==========
00:17:22.449    10:45:11 vhost.vhost_blk_lvol_integrity -- common/autotest_common.sh@1198 -- # return 0
00:17:22.449  
00:17:22.449  real	5m26.174s
00:17:22.449  user	21m24.973s
00:17:22.449  sys	0m6.869s
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1129 -- # trap - ERR
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1129 -- # print_backtrace
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1159 -- # args=('--iso' '/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh' 'vhost' '/var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf')
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1159 -- # local args
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@1161 -- # xtrace_disable
00:17:22.449    10:45:11 vhost -- common/autotest_common.sh@10 -- # set +x
00:17:22.449  ========== Backtrace start: ==========
00:17:22.449  
00:17:22.449  in /var/jenkins/workspace/vhost-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["vhost"],["/var/jenkins/workspace/vhost-phy-autotest/spdk/test/vhost/vhost.sh"],["--iso"])
00:17:22.449       ...
00:17:22.449     1124		timing_enter $test_name
00:17:22.449     1125		echo "************************************"
00:17:22.449     1126		echo "START TEST $test_name"
00:17:22.449     1127		echo "************************************"
00:17:22.449     1128		xtrace_restore
00:17:22.449     1129		time "$@"
00:17:22.449     1130		xtrace_disable
00:17:22.449     1131		echo "************************************"
00:17:22.449     1132		echo "END TEST $test_name"
00:17:22.449     1133		echo "************************************"
00:17:22.449     1134		timing_exit $test_name
00:17:22.449       ...
00:17:22.449  in /var/jenkins/workspace/vhost-phy-autotest/spdk/autotest.sh:312 -> main(["/var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf"])
00:17:22.449       ...
00:17:22.449     307 		# goes to a single node as we share hugepages with qemu instance(s) and we
00:17:22.449     308 		# cannot split it across all numa nodes without making sure there's enough
00:17:22.449     309 		# memory available.
00:17:22.449     310 	
00:17:22.449     311 		if [ $SPDK_TEST_VHOST -eq 1 ]; then
00:17:22.449  => 312 			HUGENODE=0 run_test "vhost" $rootdir/test/vhost/vhost.sh --iso
00:17:22.449     313 		fi
00:17:22.449     314 	
00:17:22.449     315 		if [ $SPDK_TEST_VFIOUSER_QEMU -eq 1 ]; then
00:17:22.449     316 			HUGENODE=0 run_test "vfio_user_qemu" $rootdir/test/vfio_user/vfio_user.sh --iso
00:17:22.449     317 		fi
00:17:22.449       ...
00:17:22.449  
00:17:22.449  ========== Backtrace end ==========
00:17:22.449    10:45:12 vhost -- common/autotest_common.sh@1198 -- # return 0
00:17:22.449  
00:17:22.449  real	7m58.117s
00:17:22.449  user	27m21.279s
00:17:22.449  sys	0m37.097s
00:17:22.449   10:45:12 vhost -- common/autotest_common.sh@1 -- # autotest_cleanup
00:17:22.449   10:45:12 vhost -- common/autotest_common.sh@1396 -- # local autotest_es=1
00:17:22.449   10:45:12 vhost -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:22.449   10:45:12 vhost -- common/autotest_common.sh@10 -- # set +x
00:17:37.337  INFO: APP EXITING
00:17:37.337  INFO: killing all VMs
00:17:37.337  INFO: killing vhost app
00:17:37.337  WARN: no vhost pid file found
00:17:37.337  INFO: EXIT DONE
00:17:39.871  Waiting for block devices as requested
00:17:40.130  0000:5e:00.0 (144d a80a): vfio-pci -> nvme
00:17:40.130  0000:af:00.0 (8086 2701): vfio-pci -> nvme
00:17:40.389  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:17:40.389  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:17:40.389  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:17:40.647  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:17:40.647  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:17:40.647  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:17:40.647  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:17:40.906  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:17:40.906  0000:b0:00.0 (8086 2701): vfio-pci -> nvme
00:17:40.906  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:17:41.164  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:17:41.164  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:17:41.164  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:17:41.422  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:17:41.422  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:17:41.422  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:17:41.680  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:17:44.967  Cleaning
00:17:44.967  Removing:    /dev/shm/spdk_tgt_trace.pid1836205
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1834589
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1836205
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1836969
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1838064
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1838501
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1839591
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1839778
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1840391
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1841502
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1842370
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1843149
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1843699
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1844325
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1844830
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1845124
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1845411
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1845647
00:17:44.967  Removing:    /var/run/dpdk/spdk_pid1846435
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1849748
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1850327
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1850900
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1851085
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1852536
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1852610
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1854085
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1854267
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1854821
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1854891
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1855430
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1855606
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1856822
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1857026
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1857283
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1859371
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1859562
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1861945
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1873307
00:17:45.226  Removing:    /var/run/dpdk/spdk_pid1882490
00:17:45.226  Clean
00:17:47.130   10:45:36 vhost -- common/autotest_common.sh@1453 -- # return 1
00:17:47.130   10:45:36 vhost -- common/autotest_common.sh@1 -- # :
00:17:47.130   10:45:36 vhost -- common/autotest_common.sh@1 -- # exit 1
00:17:47.130    10:45:36  -- spdk/autorun.sh@27 -- $ trap - ERR
00:17:47.130    10:45:36  -- spdk/autorun.sh@27 -- $ print_backtrace
00:17:47.130    10:45:36  -- common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]]
00:17:47.130    10:45:36  -- common/autotest_common.sh@1159 -- $ args=('/var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf')
00:17:47.130    10:45:36  -- common/autotest_common.sh@1159 -- $ local args
00:17:47.130    10:45:36  -- common/autotest_common.sh@1161 -- $ xtrace_disable
00:17:47.130    10:45:36  -- common/autotest_common.sh@10 -- $ set +x
00:17:47.130  ========== Backtrace start: ==========
00:17:47.130  
00:17:47.130  in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/vhost-phy-autotest/autorun-spdk.conf"])
00:17:47.130       ...
00:17:47.130     22  	trap 'timing_finish || exit 1' EXIT
00:17:47.130     23  	
00:17:47.130     24  	# Runs agent scripts
00:17:47.130     25  	$rootdir/autobuild.sh "$conf"
00:17:47.130     26  	if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then
00:17:47.130  => 27  		sudo -E $rootdir/autotest.sh "$conf"
00:17:47.130     28  	fi
00:17:47.130       ...
00:17:47.130  
00:17:47.130  ========== Backtrace end ==========
00:17:47.130    10:45:36  -- common/autotest_common.sh@1198 -- $ return 0
00:17:47.130   10:45:36  -- spdk/autorun.sh@1 -- $ timing_finish
00:17:47.130   10:45:36  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/timing.txt ]]
00:17:47.130   10:45:36  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:47.130   10:45:36  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:47.130   10:45:36  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/timing.txt
00:17:47.142  [Pipeline] }
00:17:47.157  [Pipeline] // stage
00:17:47.164  [Pipeline] }
00:17:47.181  [Pipeline] // timeout
00:17:47.188  [Pipeline] }
00:17:47.192  ERROR: script returned exit code 1
00:17:47.192  Setting overall build result to FAILURE
00:17:47.206  [Pipeline] // catchError
00:17:47.209  [Pipeline] }
00:17:47.224  [Pipeline] // wrap
00:17:47.230  [Pipeline] }
00:17:47.243  [Pipeline] // catchError
00:17:47.253  [Pipeline] stage
00:17:47.255  [Pipeline] { (Epilogue)
00:17:47.269  [Pipeline] catchError
00:17:47.271  [Pipeline] {
00:17:47.284  [Pipeline] echo
00:17:47.286  Cleanup processes
00:17:47.292  [Pipeline] sh
00:17:47.576  + sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:17:47.576  1821253 sudo -E /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:17:47.576  1821290 bash /var/jenkins/workspace/vhost-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vhost-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732008823
00:17:47.576  1933018 sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:17:47.591  [Pipeline] sh
00:17:47.878  ++ sudo pgrep -af /var/jenkins/workspace/vhost-phy-autotest/spdk
00:17:47.878  ++ grep -v 'sudo pgrep'
00:17:47.878  ++ awk '{print $1}'
00:17:47.878  + sudo kill -9 1821253 1821290
00:17:47.891  [Pipeline] sh
00:17:48.248  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:51.548  [Pipeline] sh
00:17:51.834  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:51.834  Artifacts sizes are good
00:17:51.848  [Pipeline] archiveArtifacts
00:17:51.855  Archiving artifacts
00:17:51.942  [Pipeline] sh
00:17:52.226  + sudo chown -R sys_sgci: /var/jenkins/workspace/vhost-phy-autotest
00:17:52.241  [Pipeline] cleanWs
00:17:52.251  [WS-CLEANUP] Deleting project workspace...
00:17:52.251  [WS-CLEANUP] Deferred wipeout is used...
00:17:52.258  [WS-CLEANUP] done
00:17:52.260  [Pipeline] }
00:17:52.277  [Pipeline] // catchError
00:17:52.288  [Pipeline] echo
00:17:52.290  Tests finished with errors. Please check the logs for more info.
00:17:52.294  [Pipeline] echo
00:17:52.296  Execution node will be rebooted.
00:17:52.311  [Pipeline] build
00:17:52.314  Scheduling project: reset-job
00:17:52.328  [Pipeline] sh
00:17:52.608  + logger -p user.err -t JENKINS-CI
00:17:52.617  [Pipeline] }
00:17:52.631  [Pipeline] // stage
00:17:52.637  [Pipeline] }
00:17:52.652  [Pipeline] // node
00:17:52.658  [Pipeline] End of Pipeline
00:17:52.890  Finished: FAILURE