00:00:00.000  Started by upstream project "autotest-per-patch" build number 132785
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.034  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.035  The recommended git tool is: git
00:00:00.036  using credential 00000000-0000-0000-0000-000000000002
00:00:00.037   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.058  Fetching changes from the remote Git repository
00:00:00.060   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.089  Using shallow fetch with depth 1
00:00:00.089  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.089   > git --version # timeout=10
00:00:00.130   > git --version # 'git version 2.39.2'
00:00:00.130  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.189  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.189   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.536   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.549   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.561  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.561   > git config core.sparsecheckout # timeout=10
00:00:03.573   > git read-tree -mu HEAD # timeout=10
00:00:03.590   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.620  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.621   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.776  [Pipeline] Start of Pipeline
00:00:03.785  [Pipeline] library
00:00:03.786  Loading library shm_lib@master
00:00:03.786  Library shm_lib@master is cached. Copying from home.
00:00:03.800  [Pipeline] node
00:00:03.829  Running on WFP17 in /var/jenkins/workspace/vfio-user-phy-autotest
00:00:03.831  [Pipeline] {
00:00:03.838  [Pipeline] catchError
00:00:03.839  [Pipeline] {
00:00:03.849  [Pipeline] wrap
00:00:03.856  [Pipeline] {
00:00:03.862  [Pipeline] stage
00:00:03.863  [Pipeline] { (Prologue)
00:00:04.152  [Pipeline] sh
00:00:05.030  + logger -p user.info -t JENKINS-CI
00:00:05.061  [Pipeline] echo
00:00:05.063  Node: WFP17
00:00:05.071  [Pipeline] sh
00:00:05.409  [Pipeline] setCustomBuildProperty
00:00:05.420  [Pipeline] echo
00:00:05.422  Cleanup processes
00:00:05.427  [Pipeline] sh
00:00:05.716  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:05.716  16800 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:05.730  [Pipeline] sh
00:00:06.022  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:06.022  ++ grep -v 'sudo pgrep'
00:00:06.022  ++ awk '{print $1}'
00:00:06.022  + sudo kill -9
00:00:06.022  + true
00:00:06.037  [Pipeline] cleanWs
00:00:06.048  [WS-CLEANUP] Deleting project workspace...
00:00:06.048  [WS-CLEANUP] Deferred wipeout is used...
00:00:06.061  [WS-CLEANUP] done
00:00:06.065  [Pipeline] setCustomBuildProperty
00:00:06.081  [Pipeline] sh
00:00:06.366  + sudo git config --global --replace-all safe.directory '*'
00:00:06.452  [Pipeline] httpRequest
00:00:08.357  [Pipeline] echo
00:00:08.359  Sorcerer 10.211.164.112 is alive
00:00:08.367  [Pipeline] retry
00:00:08.369  [Pipeline] {
00:00:08.380  [Pipeline] httpRequest
00:00:08.384  HttpMethod: GET
00:00:08.385  URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.386  Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.389  Response Code: HTTP/1.1 200 OK
00:00:08.389  Success: Status code 200 is in the accepted range: 200,404
00:00:08.390  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.605  [Pipeline] }
00:00:08.623  [Pipeline] // retry
00:00:08.630  [Pipeline] sh
00:00:08.919  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.936  [Pipeline] httpRequest
00:00:09.615  [Pipeline] echo
00:00:09.617  Sorcerer 10.211.164.112 is alive
00:00:09.627  [Pipeline] retry
00:00:09.628  [Pipeline] {
00:00:09.646  [Pipeline] httpRequest
00:00:09.651  HttpMethod: GET
00:00:09.652  URL: http://10.211.164.112/packages/spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:00:09.652  Sending request to url: http://10.211.164.112/packages/spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:00:09.655  Response Code: HTTP/1.1 404 Not Found
00:00:09.656  Success: Status code 404 is in the accepted range: 200,404
00:00:09.656  Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:00:09.661  [Pipeline] }
00:00:09.678  [Pipeline] // retry
00:00:09.685  [Pipeline] sh
00:00:09.975  + rm -f spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:00:09.994  [Pipeline] retry
00:00:09.996  [Pipeline] {
00:00:10.019  [Pipeline] checkout
00:00:10.035  The recommended git tool is: NONE
00:00:11.974  using credential 00000000-0000-0000-0000-000000000002
00:00:11.976  Wiping out workspace first.
00:00:11.988  Cloning the remote Git repository
00:00:11.991  Honoring refspec on initial clone
00:00:12.012  Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:12.027   > git init /var/jenkins/workspace/vfio-user-phy-autotest/spdk # timeout=10
00:00:12.067  Using reference repository: /var/ci_repos/spdk_multi
00:00:12.069  Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:12.069   > git --version # timeout=10
00:00:12.073   > git --version # 'git version 2.45.2'
00:00:12.074  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:12.081  Setting http proxy: proxy-dmz.intel.com:911
00:00:12.081   > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/96/25496/9 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:30.327  Avoid second fetch
00:00:30.355  Checking out Revision 04ba75cf7c88f46638fab0e23e6a90606b3c1f71 (FETCH_HEAD)
00:00:30.810  Commit message: "env: extend the page table to support 4-KiB mapping"
00:00:30.811  First time build. Skipping changelog.
00:00:29.786   > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:29.792   > git config --add remote.origin.fetch refs/changes/96/25496/9 # timeout=10
00:00:29.795   > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:30.329   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:30.346   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:30.362   > git config core.sparsecheckout # timeout=10
00:00:30.367   > git checkout -f 04ba75cf7c88f46638fab0e23e6a90606b3c1f71 # timeout=10
00:00:30.816   > git remote # timeout=10
00:00:30.822   > git submodule init # timeout=10
00:00:30.879   > git submodule sync # timeout=10
00:00:30.922   > git config --get remote.origin.url # timeout=10
00:00:30.930   > git submodule init # timeout=10
00:00:30.967   > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:30.972   > git config --get submodule.dpdk.url # timeout=10
00:00:30.976   > git remote # timeout=10
00:00:30.979   > git config --get remote.origin.url # timeout=10
00:00:30.984   > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:30.996   > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:31.002   > git remote # timeout=10
00:00:31.006   > git config --get remote.origin.url # timeout=10
00:00:31.010   > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:31.014   > git config --get submodule.isa-l.url # timeout=10
00:00:31.019   > git remote # timeout=10
00:00:31.024   > git config --get remote.origin.url # timeout=10
00:00:31.028   > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:31.032   > git config --get submodule.ocf.url # timeout=10
00:00:31.044   > git remote # timeout=10
00:00:31.050   > git config --get remote.origin.url # timeout=10
00:00:31.054   > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:31.058   > git config --get submodule.libvfio-user.url # timeout=10
00:00:31.062   > git remote # timeout=10
00:00:31.066   > git config --get remote.origin.url # timeout=10
00:00:31.072   > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:31.076   > git config --get submodule.xnvme.url # timeout=10
00:00:31.080   > git remote # timeout=10
00:00:31.084   > git config --get remote.origin.url # timeout=10
00:00:31.088   > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:31.092   > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:31.096   > git remote # timeout=10
00:00:31.099   > git config --get remote.origin.url # timeout=10
00:00:31.103   > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:31.126  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.126  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.126  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.126  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.127  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.127  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.127  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:31.130  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.130   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:31.147  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.147  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.147   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:31.147   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:31.148  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.148  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.148   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:31.148   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:31.148  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.148  Setting http proxy: proxy-dmz.intel.com:911
00:00:31.148   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:31.148   > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:44.704  [Pipeline] dir
00:00:44.705  Running in /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:00:44.707  [Pipeline] {
00:00:44.723  [Pipeline] sh
00:00:45.018  ++ nproc
00:00:45.018  + threads=88
00:00:45.018  + git repack -a -d --threads=88
00:00:51.606  + git submodule foreach git repack -a -d --threads=88
00:00:51.606  Entering 'dpdk'
00:00:58.193  Entering 'intel-ipsec-mb'
00:00:58.193  Entering 'isa-l'
00:00:58.462  Entering 'isa-l-crypto'
00:00:58.721  Entering 'libvfio-user'
00:00:58.721  Entering 'ocf'
00:00:58.981  Entering 'xnvme'
00:00:59.552  + find .git -type f -name alternates -print -delete
00:00:59.552  .git/objects/info/alternates
00:00:59.552  .git/modules/ocf/objects/info/alternates
00:00:59.552  .git/modules/libvfio-user/objects/info/alternates
00:00:59.552  .git/modules/dpdk/objects/info/alternates
00:00:59.552  .git/modules/xnvme/objects/info/alternates
00:00:59.552  .git/modules/isa-l/objects/info/alternates
00:00:59.552  .git/modules/intel-ipsec-mb/objects/info/alternates
00:00:59.552  .git/modules/isa-l-crypto/objects/info/alternates
00:00:59.562  [Pipeline] }
00:00:59.579  [Pipeline] // dir
00:00:59.584  [Pipeline] }
00:00:59.601  [Pipeline] // retry
00:00:59.609  [Pipeline] sh
00:00:59.896  + hash pigz
00:00:59.896  + tar -cf spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz -I pigz spdk
00:01:00.481  [Pipeline] retry
00:01:00.483  [Pipeline] {
00:01:00.498  [Pipeline] httpRequest
00:01:00.506  HttpMethod: PUT
00:01:00.506  URL: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:01:00.515  Sending request to url: http://10.211.164.112/cgi-bin/sorcerer.py?group=packages&filename=spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:01:03.138  Response Code: HTTP/1.1 200 OK
00:01:03.146  Success: Status code 200 is in the accepted range: 200
00:01:03.149  [Pipeline] }
00:01:03.166  [Pipeline] // retry
00:01:03.173  [Pipeline] echo
00:01:03.175  
00:01:03.175  Locking
00:01:03.175  Waited 0s for lock
00:01:03.175  File already exists: /storage/packages/spdk_04ba75cf7c88f46638fab0e23e6a90606b3c1f71.tar.gz
00:01:03.175  
00:01:03.179  [Pipeline] sh
00:01:03.465  + git -C spdk log --oneline -n5
00:01:03.465  04ba75cf7 env: extend the page table to support 4-KiB mapping
00:01:03.465  b4f857a04 env: add mem_map_fini and vtophys_fini for cleanup
00:01:03.465  3fe025922 env: handle possible DPDK errors in mem_map_init
00:01:03.465  b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode
00:01:03.465  496bfd677 env: match legacy mem mode config with DPDK
00:01:03.477  [Pipeline] }
00:01:03.495  [Pipeline] // stage
00:01:03.507  [Pipeline] stage
00:01:03.509  [Pipeline] { (Prepare)
00:01:03.540  [Pipeline] writeFile
00:01:03.561  [Pipeline] sh
00:01:03.852  + logger -p user.info -t JENKINS-CI
00:01:03.866  [Pipeline] sh
00:01:04.154  + logger -p user.info -t JENKINS-CI
00:01:04.166  [Pipeline] sh
00:01:04.452  + cat autorun-spdk.conf
00:01:04.452  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.452  SPDK_TEST_VFIOUSER_QEMU=1
00:01:04.452  SPDK_RUN_ASAN=1
00:01:04.452  SPDK_RUN_UBSAN=1
00:01:04.452  SPDK_TEST_SMA=1
00:01:04.460  RUN_NIGHTLY=0
00:01:04.465  [Pipeline] readFile
00:01:04.491  [Pipeline] copyArtifacts
00:01:07.460  Copied 1 artifact from "qemu-vfio" build number 34
00:01:07.466  [Pipeline] sh
00:01:07.799  + tar xf qemu-vfio.tar.gz
00:01:09.733  [Pipeline] copyArtifacts
00:01:09.757  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:01:09.761  [Pipeline] sh
00:01:10.052  + sudo mkdir -p /var/spdk/dependencies/vhost
00:01:10.065  [Pipeline] sh
00:01:10.353  + cd /var/spdk/dependencies/vhost
00:01:10.353  + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5
00:01:10.353  md5sum: spdk_test_image.qcow2.gz: No such file or directory
00:01:10.353  spdk_test_image.qcow2.gz: FAILED open or read
00:01:10.353  md5sum: WARNING: 1 listed file could not be read
00:01:10.369  [Pipeline] copyArtifacts
00:02:26.593  Copied 1 artifact from "vagrant-build-vhost" build number 6
00:02:26.598  [Pipeline] sh
00:02:26.890  + sudo mv spdk_test_image.qcow2.gz /var/spdk/dependencies/vhost
00:02:26.902  [Pipeline] sh
00:02:27.187  + sudo rm -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:27.201  [Pipeline] withEnv
00:02:27.203  [Pipeline] {
00:02:27.217  [Pipeline] sh
00:02:27.503  + set -ex
00:02:27.503  + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]]
00:02:27.503  + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:27.503  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.503  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:02:27.503  ++ SPDK_RUN_ASAN=1
00:02:27.503  ++ SPDK_RUN_UBSAN=1
00:02:27.503  ++ SPDK_TEST_SMA=1
00:02:27.503  ++ RUN_NIGHTLY=0
00:02:27.503  + case $SPDK_TEST_NVMF_NICS in
00:02:27.503  + DRIVERS=
00:02:27.503  + [[ -n '' ]]
00:02:27.503  + exit 0
00:02:27.512  [Pipeline] }
00:02:27.527  [Pipeline] // withEnv
00:02:27.534  [Pipeline] }
00:02:27.548  [Pipeline] // stage
00:02:27.559  [Pipeline] catchError
00:02:27.561  [Pipeline] {
00:02:27.576  [Pipeline] timeout
00:02:27.576  Timeout set to expire in 35 min
00:02:27.578  [Pipeline] {
00:02:27.593  [Pipeline] stage
00:02:27.596  [Pipeline] { (Tests)
00:02:27.611  [Pipeline] sh
00:02:27.895  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest
00:02:27.895  ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest
00:02:27.895  + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest
00:02:27.895  + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]]
00:02:27.895  + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:27.895  + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output
00:02:27.895  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]]
00:02:27.895  + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:02:27.895  + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output
00:02:27.895  + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]]
00:02:27.895  + [[ vfio-user-phy-autotest == pkgdep-* ]]
00:02:27.895  + cd /var/jenkins/workspace/vfio-user-phy-autotest
00:02:27.895  + source /etc/os-release
00:02:27.895  ++ NAME='Fedora Linux'
00:02:27.895  ++ VERSION='39 (Cloud Edition)'
00:02:27.895  ++ ID=fedora
00:02:27.895  ++ VERSION_ID=39
00:02:27.895  ++ VERSION_CODENAME=
00:02:27.895  ++ PLATFORM_ID=platform:f39
00:02:27.895  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:27.895  ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:27.895  ++ LOGO=fedora-logo-icon
00:02:27.895  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:27.895  ++ HOME_URL=https://fedoraproject.org/
00:02:27.895  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:27.895  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:27.895  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:27.895  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:27.895  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:27.895  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:27.895  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:27.895  ++ SUPPORT_END=2024-11-12
00:02:27.895  ++ VARIANT='Cloud Edition'
00:02:27.896  ++ VARIANT_ID=cloud
00:02:27.896  + uname -a
00:02:27.896  Linux spdk-wfp-17 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:27.896  + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:02:28.832  Hugepages
00:02:28.832  node     hugesize     free /  total
00:02:28.832  node0   1048576kB        0 /      0
00:02:28.832  node0      2048kB        0 /      0
00:02:28.832  node1   1048576kB        0 /      0
00:02:28.832  node1      2048kB        0 /      0
00:02:28.832  
00:02:28.832  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:28.832  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:02:28.832  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:02:28.832  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:02:28.832  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:02:28.832  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:02:28.832  + rm -f /tmp/spdk-ld-path
00:02:28.832  + source autorun-spdk.conf
00:02:28.832  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.832  ++ SPDK_TEST_VFIOUSER_QEMU=1
00:02:28.832  ++ SPDK_RUN_ASAN=1
00:02:28.832  ++ SPDK_RUN_UBSAN=1
00:02:28.832  ++ SPDK_TEST_SMA=1
00:02:28.832  ++ RUN_NIGHTLY=0
00:02:28.832  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:28.832  + [[ -n '' ]]
00:02:28.832  + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:28.832  + for M in /var/spdk/build-*-manifest.txt
00:02:28.832  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:28.832  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:28.832  + for M in /var/spdk/build-*-manifest.txt
00:02:28.832  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:28.832  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:28.832  + for M in /var/spdk/build-*-manifest.txt
00:02:28.832  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:28.832  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/
00:02:28.832  ++ uname
00:02:28.832  + [[ Linux == \L\i\n\u\x ]]
00:02:28.832  + sudo dmesg -T
00:02:28.832  + sudo dmesg --clear
00:02:29.092  + dmesg_pid=19560
00:02:29.092  + [[ Fedora Linux == FreeBSD ]]
00:02:29.092  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.092  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:29.092  + sudo dmesg -Tw
00:02:29.092  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:29.092  + [[ -x /usr/src/fio-static/fio ]]
00:02:29.092  + export FIO_BIN=/usr/src/fio-static/fio
00:02:29.092  + FIO_BIN=/usr/src/fio-static/fio
00:02:29.092  + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:29.092  ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64
00:02:29.092  + deps='	linux-vdso.so.1 (0x00007ffd621e8000)
00:02:29.092  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007f159454c000)
00:02:29.092  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f1594532000)
00:02:29.092  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f15944fb000)
00:02:29.092  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007f15944a2000)
00:02:29.092  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007f1594495000)
00:02:29.092  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f1594486000)
00:02:29.092  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007f15942ac000)
00:02:29.092  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007f159424c000)
00:02:29.092  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f1594102000)
00:02:29.092  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007f15940e6000)
00:02:29.092  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007f15940c4000)
00:02:29.092  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007f15940a2000)
00:02:29.092  	libbpf.so.0 => not found
00:02:29.092  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007f1594061000)
00:02:29.092  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007f159402c000)
00:02:29.092  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007f1594025000)
00:02:29.092  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007f159401d000)
00:02:29.092  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007f1593fdb000)
00:02:29.092  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007f1593fab000)
00:02:29.092  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007f1593fa6000)
00:02:29.092  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007f15936eb000)
00:02:29.092  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007f1593523000)
00:02:29.092  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007f1593442000)
00:02:29.092  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f159341d000)
00:02:29.092  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f1593239000)
00:02:29.092  	/lib64/ld-linux-x86-64.so.2 (0x00007f15956b0000)
00:02:29.092  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007f159322f000)
00:02:29.092  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007f1593202000)
00:02:29.092  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007f15931f8000)
00:02:29.092  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007f15931dc000)
00:02:29.092  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007f1593189000)
00:02:29.092  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f159315c000)
00:02:29.092  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007f159314c000)
00:02:29.092  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007f15930b1000)
00:02:29.092  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007f159308c000)
00:02:29.092  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007f1592ff4000)
00:02:29.092  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007f1592eba000)
00:02:29.092  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007f1592e17000)
00:02:29.092  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007f1592d96000)
00:02:29.092  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007f1592166000)
00:02:29.092  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007f1591c8d000)
00:02:29.092  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f1591a37000)
00:02:29.092  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007f1591978000)
00:02:29.092  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007f1591945000)
00:02:29.092  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007f1591909000)
00:02:29.092  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007f15918e3000)
00:02:29.092  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007f1591884000)
00:02:29.092  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007f159187c000)
00:02:29.092  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007f1591868000)
00:02:29.092  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f1591857000)
00:02:29.092  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007f15917a3000)
00:02:29.092  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007f1591709000)
00:02:29.092  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007f15916dc000)
00:02:29.092  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007f15916ba000)
00:02:29.092  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007f1591647000)
00:02:29.092  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007f1591633000)
00:02:29.092  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f15915dd000)
00:02:29.092  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007f1591576000)
00:02:29.092  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007f1591564000)
00:02:29.092  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007f1591556000)
00:02:29.092  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007f15913a6000)
00:02:29.092  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f15912cd000)
00:02:29.092  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f15912b3000)
00:02:29.092  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f15912ac000)
00:02:29.092  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f159129c000)
00:02:29.092  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f1591295000)
00:02:29.092  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007f159123d000)
00:02:29.092  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007f159121e000)
00:02:29.092  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007f15911f9000)
00:02:29.092  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f15911c0000)'
00:02:29.092  + [[ 	linux-vdso.so.1 (0x00007ffd621e8000)
00:02:29.092  	libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007f159454c000)
00:02:29.092  	libz.so.1 => /usr/lib64/libz.so.1 (0x00007f1594532000)
00:02:29.092  	libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007f15944fb000)
00:02:29.092  	libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007f15944a2000)
00:02:29.092  	libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007f1594495000)
00:02:29.092  	libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f1594486000)
00:02:29.092  	libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007f15942ac000)
00:02:29.092  	libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007f159424c000)
00:02:29.092  	libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007f1594102000)
00:02:29.092  	librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007f15940e6000)
00:02:29.093  	libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007f15940c4000)
00:02:29.093  	libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007f15940a2000)
00:02:29.093  	libbpf.so.0 => not found
00:02:29.093  	libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007f1594061000)
00:02:29.093  	libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007f159402c000)
00:02:29.093  	libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007f1594025000)
00:02:29.093  	liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007f159401d000)
00:02:29.093  	libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007f1593fdb000)
00:02:29.093  	libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007f1593fab000)
00:02:29.093  	libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007f1593fa6000)
00:02:29.093  	librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007f15936eb000)
00:02:29.093  	librados.so.2 => /usr/lib64/librados.so.2 (0x00007f1593523000)
00:02:29.093  	libm.so.6 => /usr/lib64/libm.so.6 (0x00007f1593442000)
00:02:29.093  	libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f159341d000)
00:02:29.093  	libc.so.6 => /usr/lib64/libc.so.6 (0x00007f1593239000)
00:02:29.093  	/lib64/ld-linux-x86-64.so.2 (0x00007f15956b0000)
00:02:29.093  	libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007f159322f000)
00:02:29.093  	libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007f1593202000)
00:02:29.093  	libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007f15931f8000)
00:02:29.093  	libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007f15931dc000)
00:02:29.093  	libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007f1593189000)
00:02:29.093  	libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007f159315c000)
00:02:29.093  	libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007f159314c000)
00:02:29.093  	libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007f15930b1000)
00:02:29.093  	libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007f159308c000)
00:02:29.093  	libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007f1592ff4000)
00:02:29.093  	libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007f1592eba000)
00:02:29.093  	libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007f1592e17000)
00:02:29.093  	libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007f1592d96000)
00:02:29.093  	libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007f1592166000)
00:02:29.093  	libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007f1591c8d000)
00:02:29.093  	libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f1591a37000)
00:02:29.093  	libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007f1591978000)
00:02:29.093  	liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007f1591945000)
00:02:29.093  	libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007f1591909000)
00:02:29.093  	libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007f15918e3000)
00:02:29.093  	libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007f1591884000)
00:02:29.093  	libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007f159187c000)
00:02:29.093  	libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007f1591868000)
00:02:29.093  	libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007f1591857000)
00:02:29.093  	libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007f15917a3000)
00:02:29.093  	libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007f1591709000)
00:02:29.093  	libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007f15916dc000)
00:02:29.093  	libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007f15916ba000)
00:02:29.093  	libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007f1591647000)
00:02:29.093  	libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007f1591633000)
00:02:29.093  	libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007f15915dd000)
00:02:29.093  	libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007f1591576000)
00:02:29.093  	liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007f1591564000)
00:02:29.093  	libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007f1591556000)
00:02:29.093  	libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007f15913a6000)
00:02:29.093  	libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007f15912cd000)
00:02:29.093  	libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007f15912b3000)
00:02:29.093  	libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007f15912ac000)
00:02:29.093  	libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007f159129c000)
00:02:29.093  	libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007f1591295000)
00:02:29.093  	libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007f159123d000)
00:02:29.093  	libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007f159121e000)
00:02:29.093  	libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007f15911f9000)
00:02:29.093  	libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007f15911c0000) == *\n\o\t\ \f\o\u\n\d* ]]
00:02:29.093  + unset -v VFIO_QEMU_BIN
00:02:29.093  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:29.093  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:29.093  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.093  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:29.093  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:29.093  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.093  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:29.093  + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:29.093    10:52:45  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:29.093   10:52:45  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_VFIOUSER_QEMU=1
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_RUN_ASAN=1
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_RUN_UBSAN=1
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_SMA=1
00:02:29.093    10:52:45  -- vfio-user-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:29.093   10:52:45  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:29.093   10:52:45  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:02:29.093     10:52:46  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:29.093    10:52:46  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:02:29.093     10:52:46  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:29.093     10:52:46  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:29.093     10:52:46  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:29.093     10:52:46  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:29.093      10:52:46  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.093      10:52:46  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.093      10:52:46  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.093      10:52:46  -- paths/export.sh@5 -- $ export PATH
00:02:29.093      10:52:46  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:29.093    10:52:46  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:02:29.093      10:52:46  -- common/autobuild_common.sh@493 -- $ date +%s
00:02:29.093     10:52:46  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733737966.XXXXXX
00:02:29.093    10:52:46  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733737966.4snP3q
00:02:29.093    10:52:46  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:29.093    10:52:46  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:29.093    10:52:46  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/'
00:02:29.093    10:52:46  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:29.093    10:52:46  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:29.093     10:52:46  -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:29.093     10:52:46  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:29.093     10:52:46  -- common/autotest_common.sh@10 -- $ set +x
00:02:29.093    10:52:46  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto'
00:02:29.093    10:52:46  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:29.093    10:52:46  -- pm/common@17 -- $ local monitor
00:02:29.093    10:52:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.093    10:52:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.093    10:52:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.093     10:52:46  -- pm/common@21 -- $ date +%s
00:02:29.093    10:52:46  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:29.093    10:52:46  -- pm/common@25 -- $ sleep 1
00:02:29.093     10:52:46  -- pm/common@21 -- $ date +%s
00:02:29.093     10:52:46  -- pm/common@21 -- $ date +%s
00:02:29.093     10:52:46  -- pm/common@21 -- $ date +%s
00:02:29.093    10:52:46  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737966
00:02:29.093    10:52:46  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737966
00:02:29.093    10:52:46  -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737966
00:02:29.093    10:52:46  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733737966
00:02:29.093  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737966_collect-cpu-load.pm.log
00:02:29.093  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737966_collect-vmstat.pm.log
00:02:29.094  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737966_collect-cpu-temp.pm.log
00:02:29.353  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733737966_collect-bmc-pm.bmc.pm.log
00:02:30.292    10:52:47  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:30.292   10:52:47  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:30.292   10:52:47  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:30.292   10:52:47  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:02:30.292   10:52:47  -- spdk/autobuild.sh@16 -- $ date -u
00:02:30.292  Mon Dec  9 09:52:47 AM UTC 2024
00:02:30.292   10:52:47  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:30.292  v25.01-pre-316-g04ba75cf7
00:02:30.292   10:52:47  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:30.292   10:52:47  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:30.292   10:52:47  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.292   10:52:47  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.292   10:52:47  -- common/autotest_common.sh@10 -- $ set +x
00:02:30.292  ************************************
00:02:30.292  START TEST asan
00:02:30.292  ************************************
00:02:30.292   10:52:47 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:30.292  using asan
00:02:30.292  
00:02:30.292  real	0m0.000s
00:02:30.292  user	0m0.000s
00:02:30.292  sys	0m0.000s
00:02:30.292   10:52:47 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.292   10:52:47 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.292  ************************************
00:02:30.292  END TEST asan
00:02:30.292  ************************************
00:02:30.292   10:52:47  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:30.292   10:52:47  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:30.292   10:52:47  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:30.292   10:52:47  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:30.292   10:52:47  -- common/autotest_common.sh@10 -- $ set +x
00:02:30.292  ************************************
00:02:30.292  START TEST ubsan
00:02:30.292  ************************************
00:02:30.292   10:52:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:30.292  using ubsan
00:02:30.292  
00:02:30.292  real	0m0.000s
00:02:30.292  user	0m0.000s
00:02:30.292  sys	0m0.000s
00:02:30.292   10:52:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:30.293   10:52:47 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:30.293  ************************************
00:02:30.293  END TEST ubsan
00:02:30.293  ************************************
00:02:30.293   10:52:47  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:30.293   10:52:47  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:30.293   10:52:47  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:30.293   10:52:47  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto --with-shared
00:02:30.860  Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk
00:02:30.860  Using default DPDK in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:02:31.429  Using 'verbs' RDMA provider
00:02:42.790  Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:50.903  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:50.903  Creating mk/config.mk...done.
00:02:50.903  Creating mk/cc.flags.mk...done.
00:02:50.903  Type 'make' to build.
00:02:50.903   10:53:07  -- spdk/autobuild.sh@70 -- $ run_test make make -j88
00:02:50.903   10:53:07  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:50.903   10:53:07  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:50.903   10:53:07  -- common/autotest_common.sh@10 -- $ set +x
00:02:50.903  ************************************
00:02:50.903  START TEST make
00:02:50.903  ************************************
00:02:50.903   10:53:07 make -- common/autotest_common.sh@1129 -- $ make -j88
00:02:50.904  make[1]: Nothing to be done for 'all'.
00:02:51.165  help2man: can't get `--help' info from ./programs/igzip
00:02:51.165  Try `--no-discard-stderr' if option outputs to stderr
00:02:51.165  make[3]: [Makefile:4944: programs/igzip.1] Error 127 (ignored)
00:02:53.076  The Meson build system
00:02:53.076  Version: 1.5.0
00:02:53.076  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user
00:02:53.076  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:53.076  Build type: native build
00:02:53.076  Project name: libvfio-user
00:02:53.076  Project version: 0.0.1
00:02:53.076  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:53.076  C linker for the host machine: cc ld.bfd 2.40-14
00:02:53.076  Host machine cpu family: x86_64
00:02:53.076  Host machine cpu: x86_64
00:02:53.076  Run-time dependency threads found: YES
00:02:53.076  Library dl found: YES
00:02:53.076  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:53.076  Run-time dependency json-c found: YES 0.17
00:02:53.076  Run-time dependency cmocka found: YES 1.1.7
00:02:53.076  Program pytest-3 found: NO
00:02:53.076  Program flake8 found: NO
00:02:53.076  Program misspell-fixer found: NO
00:02:53.076  Program restructuredtext-lint found: NO
00:02:53.076  Program valgrind found: YES (/usr/bin/valgrind)
00:02:53.076  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:53.076  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:53.076  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:53.076  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:53.076  Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:53.076  Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:53.076  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:53.076  Build targets in project: 8
00:02:53.076  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:53.077   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:53.077  
00:02:53.077  libvfio-user 0.0.1
00:02:53.077  
00:02:53.077    User defined options
00:02:53.077      buildtype      : debug
00:02:53.077      default_library: shared
00:02:53.077      libdir         : /usr/local/lib
00:02:53.077  
00:02:53.077  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:54.027  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:54.027  [1/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:54.027  [2/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:54.027  [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:54.027  [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:54.027  [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:54.027  [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:54.027  [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:54.027  [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:54.027  [9/37] Compiling C object samples/null.p/null.c.o
00:02:54.027  [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:54.287  [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:54.287  [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:54.287  [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:54.287  [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:54.287  [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:54.287  [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:54.287  [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:54.287  [18/37] Compiling C object samples/client.p/client.c.o
00:02:54.287  [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:54.287  [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:54.287  [21/37] Compiling C object samples/server.p/server.c.o
00:02:54.287  [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:54.287  [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:54.287  [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:54.287  [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:54.287  [26/37] Linking target samples/client
00:02:54.287  [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:54.287  [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:54.287  [29/37] Linking target lib/libvfio-user.so.0.0.1
00:02:54.555  [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:54.555  [31/37] Linking target test/unit_tests
00:02:54.555  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:54.821  [33/37] Linking target samples/lspci
00:02:54.821  [34/37] Linking target samples/server
00:02:54.821  [35/37] Linking target samples/gpio-pci-idio-16
00:02:54.821  [36/37] Linking target samples/null
00:02:54.821  [37/37] Linking target samples/shadow_ioeventfd_server
00:02:54.821  INFO: autodetecting backend as ninja
00:02:54.821  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:54.821  DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:55.391  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:55.391  ninja: no work to do.
00:03:27.474  The Meson build system
00:03:27.474  Version: 1.5.0
00:03:27.474  Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk
00:03:27.474  Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp
00:03:27.474  Build type: native build
00:03:27.474  Program cat found: YES (/usr/bin/cat)
00:03:27.474  Project name: DPDK
00:03:27.474  Project version: 24.03.0
00:03:27.474  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:27.474  C linker for the host machine: cc ld.bfd 2.40-14
00:03:27.474  Host machine cpu family: x86_64
00:03:27.474  Host machine cpu: x86_64
00:03:27.474  Message: ## Building in Developer Mode ##
00:03:27.474  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:27.474  Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:27.474  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:27.474  Program python3 found: YES (/usr/bin/python3)
00:03:27.474  Program cat found: YES (/usr/bin/cat)
00:03:27.474  Compiler for C supports arguments -march=native: YES 
00:03:27.474  Checking for size of "void *" : 8 
00:03:27.474  Checking for size of "void *" : 8 (cached)
00:03:27.474  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:03:27.474  Library m found: YES
00:03:27.474  Library numa found: YES
00:03:27.474  Has header "numaif.h" : YES 
00:03:27.474  Library fdt found: NO
00:03:27.474  Library execinfo found: NO
00:03:27.474  Has header "execinfo.h" : YES 
00:03:27.474  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:27.474  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:27.474  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:27.474  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:27.474  Run-time dependency openssl found: YES 3.1.1
00:03:27.474  Run-time dependency libpcap found: YES 1.10.4
00:03:27.474  Has header "pcap.h" with dependency libpcap: YES 
00:03:27.474  Compiler for C supports arguments -Wcast-qual: YES 
00:03:27.474  Compiler for C supports arguments -Wdeprecated: YES 
00:03:27.474  Compiler for C supports arguments -Wformat: YES 
00:03:27.474  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:03:27.474  Compiler for C supports arguments -Wformat-security: NO 
00:03:27.474  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:27.474  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:27.474  Compiler for C supports arguments -Wnested-externs: YES 
00:03:27.474  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:27.474  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:27.474  Compiler for C supports arguments -Wsign-compare: YES 
00:03:27.474  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:27.474  Compiler for C supports arguments -Wundef: YES 
00:03:27.474  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:27.474  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:27.474  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:27.474  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:27.474  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:03:27.474  Program objdump found: YES (/usr/bin/objdump)
00:03:27.474  Compiler for C supports arguments -mavx512f: YES 
00:03:27.474  Checking if "AVX512 checking" compiles: YES 
00:03:27.474  Fetching value of define "__SSE4_2__" : 1 
00:03:27.474  Fetching value of define "__AES__" : 1 
00:03:27.474  Fetching value of define "__AVX__" : 1 
00:03:27.474  Fetching value of define "__AVX2__" : 1 
00:03:27.474  Fetching value of define "__AVX512BW__" : (undefined) 
00:03:27.474  Fetching value of define "__AVX512CD__" : (undefined) 
00:03:27.474  Fetching value of define "__AVX512DQ__" : (undefined) 
00:03:27.474  Fetching value of define "__AVX512F__" : (undefined) 
00:03:27.474  Fetching value of define "__AVX512VL__" : (undefined) 
00:03:27.474  Fetching value of define "__PCLMUL__" : 1 
00:03:27.474  Fetching value of define "__RDRND__" : 1 
00:03:27.474  Fetching value of define "__RDSEED__" : 1 
00:03:27.474  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:27.474  Fetching value of define "__znver1__" : (undefined) 
00:03:27.474  Fetching value of define "__znver2__" : (undefined) 
00:03:27.474  Fetching value of define "__znver3__" : (undefined) 
00:03:27.474  Fetching value of define "__znver4__" : (undefined) 
00:03:27.474  Library asan found: YES
00:03:27.474  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:27.474  Message: lib/log: Defining dependency "log"
00:03:27.474  Message: lib/kvargs: Defining dependency "kvargs"
00:03:27.474  Message: lib/telemetry: Defining dependency "telemetry"
00:03:27.474  Library rt found: YES
00:03:27.474  Checking for function "getentropy" : NO 
00:03:27.475  Message: lib/eal: Defining dependency "eal"
00:03:27.475  Message: lib/ring: Defining dependency "ring"
00:03:27.475  Message: lib/rcu: Defining dependency "rcu"
00:03:27.475  Message: lib/mempool: Defining dependency "mempool"
00:03:27.475  Message: lib/mbuf: Defining dependency "mbuf"
00:03:27.475  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:27.475  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:27.475  Compiler for C supports arguments -mpclmul: YES 
00:03:27.475  Compiler for C supports arguments -maes: YES 
00:03:27.475  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:27.475  Compiler for C supports arguments -mavx512bw: YES 
00:03:27.475  Compiler for C supports arguments -mavx512dq: YES 
00:03:27.475  Compiler for C supports arguments -mavx512vl: YES 
00:03:27.475  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:27.475  Compiler for C supports arguments -mavx2: YES 
00:03:27.475  Compiler for C supports arguments -mavx: YES 
00:03:27.475  Message: lib/net: Defining dependency "net"
00:03:27.475  Message: lib/meter: Defining dependency "meter"
00:03:27.475  Message: lib/ethdev: Defining dependency "ethdev"
00:03:27.475  Message: lib/pci: Defining dependency "pci"
00:03:27.475  Message: lib/cmdline: Defining dependency "cmdline"
00:03:27.475  Message: lib/hash: Defining dependency "hash"
00:03:27.475  Message: lib/timer: Defining dependency "timer"
00:03:27.475  Message: lib/compressdev: Defining dependency "compressdev"
00:03:27.475  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:27.475  Message: lib/dmadev: Defining dependency "dmadev"
00:03:27.475  Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:27.475  Message: lib/power: Defining dependency "power"
00:03:27.475  Message: lib/reorder: Defining dependency "reorder"
00:03:27.475  Message: lib/security: Defining dependency "security"
00:03:27.475  Has header "linux/userfaultfd.h" : YES 
00:03:27.475  Has header "linux/vduse.h" : YES 
00:03:27.475  Message: lib/vhost: Defining dependency "vhost"
00:03:27.475  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:27.475  Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:03:27.475  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:27.475  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:27.475  Compiler for C supports arguments -std=c11: YES 
00:03:27.475  Compiler for C supports arguments -Wno-strict-prototypes: YES 
00:03:27.475  Compiler for C supports arguments -D_BSD_SOURCE: YES 
00:03:27.475  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 
00:03:27.475  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 
00:03:27.475  Run-time dependency libmlx5 found: YES 1.24.46.0
00:03:27.475  Run-time dependency libibverbs found: YES 1.14.46.0
00:03:27.475  Library mtcr_ul found: NO
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 
00:03:27.475  Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 
00:03:27.475  Configuring mlx5_autoconf.h using configuration
00:03:27.475  Message: drivers/common/mlx5: Defining dependency "common_mlx5"
00:03:27.475  Run-time dependency libcrypto found: YES 3.1.1
00:03:27.475  Library IPSec_MB found: YES
00:03:27.475  Fetching value of define "IMB_VERSION_STR" : "1.5.0" 
00:03:27.475  Message: drivers/common/qat: Defining dependency "common_qat"
00:03:27.475  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:27.475  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:27.475  Library IPSec_MB found: YES
00:03:27.475  Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached)
00:03:27.475  Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb"
00:03:27.475  Compiler for C supports arguments -std=c11: YES (cached)
00:03:27.475  Compiler for C supports arguments -Wno-strict-prototypes: YES (cached)
00:03:27.475  Compiler for C supports arguments -D_BSD_SOURCE: YES (cached)
00:03:27.475  Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:03:27.475  Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:03:27.475  Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5"
00:03:27.475  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:27.475  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:27.475  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:27.475  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:27.475  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:27.475  Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:27.475  Configuring doxy-api-html.conf using configuration
00:03:27.475  Configuring doxy-api-man.conf using configuration
00:03:27.475  Program mandb found: YES (/usr/bin/mandb)
00:03:27.475  Program sphinx-build found: NO
00:03:27.475  Configuring rte_build_config.h using configuration
00:03:27.475  Message: 
00:03:27.475  =================
00:03:27.475  Applications Enabled
00:03:27.475  =================
00:03:27.475  
00:03:27.475  apps:
00:03:27.475  	
00:03:27.475  
00:03:27.475  Message: 
00:03:27.475  =================
00:03:27.475  Libraries Enabled
00:03:27.475  =================
00:03:27.475  
00:03:27.475  libs:
00:03:27.475  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:27.475  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:03:27.475  	cryptodev, dmadev, power, reorder, security, vhost, 
00:03:27.475  
00:03:27.475  Message: 
00:03:27.475  ===============
00:03:27.475  Drivers Enabled
00:03:27.475  ===============
00:03:27.475  
00:03:27.475  common:
00:03:27.475  	mlx5, qat, 
00:03:27.475  bus:
00:03:27.475  	auxiliary, pci, vdev, 
00:03:27.475  mempool:
00:03:27.475  	ring, 
00:03:27.475  dma:
00:03:27.475  	
00:03:27.475  net:
00:03:27.475  	
00:03:27.475  crypto:
00:03:27.475  	ipsec_mb, mlx5, 
00:03:27.475  compress:
00:03:27.475  	
00:03:27.475  vdpa:
00:03:27.475  	
00:03:27.475  
00:03:27.475  Message: 
00:03:27.475  =================
00:03:27.475  Content Skipped
00:03:27.475  =================
00:03:27.475  
00:03:27.475  apps:
00:03:27.475  	dumpcap:	explicitly disabled via build config
00:03:27.475  	graph:	explicitly disabled via build config
00:03:27.475  	pdump:	explicitly disabled via build config
00:03:27.475  	proc-info:	explicitly disabled via build config
00:03:27.475  	test-acl:	explicitly disabled via build config
00:03:27.475  	test-bbdev:	explicitly disabled via build config
00:03:27.475  	test-cmdline:	explicitly disabled via build config
00:03:27.475  	test-compress-perf:	explicitly disabled via build config
00:03:27.475  	test-crypto-perf:	explicitly disabled via build config
00:03:27.475  	test-dma-perf:	explicitly disabled via build config
00:03:27.475  	test-eventdev:	explicitly disabled via build config
00:03:27.475  	test-fib:	explicitly disabled via build config
00:03:27.475  	test-flow-perf:	explicitly disabled via build config
00:03:27.475  	test-gpudev:	explicitly disabled via build config
00:03:27.475  	test-mldev:	explicitly disabled via build config
00:03:27.475  	test-pipeline:	explicitly disabled via build config
00:03:27.475  	test-pmd:	explicitly disabled via build config
00:03:27.475  	test-regex:	explicitly disabled via build config
00:03:27.475  	test-sad:	explicitly disabled via build config
00:03:27.475  	test-security-perf:	explicitly disabled via build config
00:03:27.475  	
00:03:27.475  libs:
00:03:27.475  	argparse:	explicitly disabled via build config
00:03:27.475  	metrics:	explicitly disabled via build config
00:03:27.475  	acl:	explicitly disabled via build config
00:03:27.475  	bbdev:	explicitly disabled via build config
00:03:27.475  	bitratestats:	explicitly disabled via build config
00:03:27.475  	bpf:	explicitly disabled via build config
00:03:27.475  	cfgfile:	explicitly disabled via build config
00:03:27.475  	distributor:	explicitly disabled via build config
00:03:27.475  	efd:	explicitly disabled via build config
00:03:27.475  	eventdev:	explicitly disabled via build config
00:03:27.475  	dispatcher:	explicitly disabled via build config
00:03:27.475  	gpudev:	explicitly disabled via build config
00:03:27.475  	gro:	explicitly disabled via build config
00:03:27.475  	gso:	explicitly disabled via build config
00:03:27.475  	ip_frag:	explicitly disabled via build config
00:03:27.475  	jobstats:	explicitly disabled via build config
00:03:27.475  	latencystats:	explicitly disabled via build config
00:03:27.475  	lpm:	explicitly disabled via build config
00:03:27.475  	member:	explicitly disabled via build config
00:03:27.475  	pcapng:	explicitly disabled via build config
00:03:27.475  	rawdev:	explicitly disabled via build config
00:03:27.475  	regexdev:	explicitly disabled via build config
00:03:27.475  	mldev:	explicitly disabled via build config
00:03:27.475  	rib:	explicitly disabled via build config
00:03:27.475  	sched:	explicitly disabled via build config
00:03:27.475  	stack:	explicitly disabled via build config
00:03:27.475  	ipsec:	explicitly disabled via build config
00:03:27.475  	pdcp:	explicitly disabled via build config
00:03:27.475  	fib:	explicitly disabled via build config
00:03:27.475  	port:	explicitly disabled via build config
00:03:27.475  	pdump:	explicitly disabled via build config
00:03:27.475  	table:	explicitly disabled via build config
00:03:27.475  	pipeline:	explicitly disabled via build config
00:03:27.475  	graph:	explicitly disabled via build config
00:03:27.475  	node:	explicitly disabled via build config
00:03:27.475  	
00:03:27.475  drivers:
00:03:27.475  	common/cpt:	not in enabled drivers build config
00:03:27.475  	common/dpaax:	not in enabled drivers build config
00:03:27.475  	common/iavf:	not in enabled drivers build config
00:03:27.475  	common/idpf:	not in enabled drivers build config
00:03:27.475  	common/ionic:	not in enabled drivers build config
00:03:27.475  	common/mvep:	not in enabled drivers build config
00:03:27.475  	common/octeontx:	not in enabled drivers build config
00:03:27.475  	bus/cdx:	not in enabled drivers build config
00:03:27.475  	bus/dpaa:	not in enabled drivers build config
00:03:27.475  	bus/fslmc:	not in enabled drivers build config
00:03:27.475  	bus/ifpga:	not in enabled drivers build config
00:03:27.475  	bus/platform:	not in enabled drivers build config
00:03:27.475  	bus/uacce:	not in enabled drivers build config
00:03:27.475  	bus/vmbus:	not in enabled drivers build config
00:03:27.475  	common/cnxk:	not in enabled drivers build config
00:03:27.475  	common/nfp:	not in enabled drivers build config
00:03:27.475  	common/nitrox:	not in enabled drivers build config
00:03:27.475  	common/sfc_efx:	not in enabled drivers build config
00:03:27.475  	mempool/bucket:	not in enabled drivers build config
00:03:27.475  	mempool/cnxk:	not in enabled drivers build config
00:03:27.475  	mempool/dpaa:	not in enabled drivers build config
00:03:27.475  	mempool/dpaa2:	not in enabled drivers build config
00:03:27.475  	mempool/octeontx:	not in enabled drivers build config
00:03:27.475  	mempool/stack:	not in enabled drivers build config
00:03:27.475  	dma/cnxk:	not in enabled drivers build config
00:03:27.475  	dma/dpaa:	not in enabled drivers build config
00:03:27.475  	dma/dpaa2:	not in enabled drivers build config
00:03:27.475  	dma/hisilicon:	not in enabled drivers build config
00:03:27.475  	dma/idxd:	not in enabled drivers build config
00:03:27.475  	dma/ioat:	not in enabled drivers build config
00:03:27.475  	dma/skeleton:	not in enabled drivers build config
00:03:27.475  	net/af_packet:	not in enabled drivers build config
00:03:27.475  	net/af_xdp:	not in enabled drivers build config
00:03:27.475  	net/ark:	not in enabled drivers build config
00:03:27.475  	net/atlantic:	not in enabled drivers build config
00:03:27.475  	net/avp:	not in enabled drivers build config
00:03:27.475  	net/axgbe:	not in enabled drivers build config
00:03:27.475  	net/bnx2x:	not in enabled drivers build config
00:03:27.475  	net/bnxt:	not in enabled drivers build config
00:03:27.475  	net/bonding:	not in enabled drivers build config
00:03:27.475  	net/cnxk:	not in enabled drivers build config
00:03:27.475  	net/cpfl:	not in enabled drivers build config
00:03:27.475  	net/cxgbe:	not in enabled drivers build config
00:03:27.475  	net/dpaa:	not in enabled drivers build config
00:03:27.475  	net/dpaa2:	not in enabled drivers build config
00:03:27.475  	net/e1000:	not in enabled drivers build config
00:03:27.475  	net/ena:	not in enabled drivers build config
00:03:27.475  	net/enetc:	not in enabled drivers build config
00:03:27.475  	net/enetfec:	not in enabled drivers build config
00:03:27.475  	net/enic:	not in enabled drivers build config
00:03:27.475  	net/failsafe:	not in enabled drivers build config
00:03:27.475  	net/fm10k:	not in enabled drivers build config
00:03:27.475  	net/gve:	not in enabled drivers build config
00:03:27.475  	net/hinic:	not in enabled drivers build config
00:03:27.475  	net/hns3:	not in enabled drivers build config
00:03:27.475  	net/i40e:	not in enabled drivers build config
00:03:27.475  	net/iavf:	not in enabled drivers build config
00:03:27.475  	net/ice:	not in enabled drivers build config
00:03:27.475  	net/idpf:	not in enabled drivers build config
00:03:27.475  	net/igc:	not in enabled drivers build config
00:03:27.475  	net/ionic:	not in enabled drivers build config
00:03:27.475  	net/ipn3ke:	not in enabled drivers build config
00:03:27.475  	net/ixgbe:	not in enabled drivers build config
00:03:27.475  	net/mana:	not in enabled drivers build config
00:03:27.475  	net/memif:	not in enabled drivers build config
00:03:27.475  	net/mlx4:	not in enabled drivers build config
00:03:27.475  	net/mlx5:	not in enabled drivers build config
00:03:27.475  	net/mvneta:	not in enabled drivers build config
00:03:27.475  	net/mvpp2:	not in enabled drivers build config
00:03:27.475  	net/netvsc:	not in enabled drivers build config
00:03:27.475  	net/nfb:	not in enabled drivers build config
00:03:27.475  	net/nfp:	not in enabled drivers build config
00:03:27.476  	net/ngbe:	not in enabled drivers build config
00:03:27.476  	net/null:	not in enabled drivers build config
00:03:27.476  	net/octeontx:	not in enabled drivers build config
00:03:27.476  	net/octeon_ep:	not in enabled drivers build config
00:03:27.476  	net/pcap:	not in enabled drivers build config
00:03:27.476  	net/pfe:	not in enabled drivers build config
00:03:27.476  	net/qede:	not in enabled drivers build config
00:03:27.476  	net/ring:	not in enabled drivers build config
00:03:27.476  	net/sfc:	not in enabled drivers build config
00:03:27.476  	net/softnic:	not in enabled drivers build config
00:03:27.476  	net/tap:	not in enabled drivers build config
00:03:27.476  	net/thunderx:	not in enabled drivers build config
00:03:27.476  	net/txgbe:	not in enabled drivers build config
00:03:27.476  	net/vdev_netvsc:	not in enabled drivers build config
00:03:27.476  	net/vhost:	not in enabled drivers build config
00:03:27.476  	net/virtio:	not in enabled drivers build config
00:03:27.476  	net/vmxnet3:	not in enabled drivers build config
00:03:27.476  	raw/*:	missing internal dependency, "rawdev"
00:03:27.476  	crypto/armv8:	not in enabled drivers build config
00:03:27.476  	crypto/bcmfs:	not in enabled drivers build config
00:03:27.476  	crypto/caam_jr:	not in enabled drivers build config
00:03:27.476  	crypto/ccp:	not in enabled drivers build config
00:03:27.476  	crypto/cnxk:	not in enabled drivers build config
00:03:27.476  	crypto/dpaa_sec:	not in enabled drivers build config
00:03:27.476  	crypto/dpaa2_sec:	not in enabled drivers build config
00:03:27.476  	crypto/mvsam:	not in enabled drivers build config
00:03:27.476  	crypto/nitrox:	not in enabled drivers build config
00:03:27.476  	crypto/null:	not in enabled drivers build config
00:03:27.476  	crypto/octeontx:	not in enabled drivers build config
00:03:27.476  	crypto/openssl:	not in enabled drivers build config
00:03:27.476  	crypto/scheduler:	not in enabled drivers build config
00:03:27.476  	crypto/uadk:	not in enabled drivers build config
00:03:27.476  	crypto/virtio:	not in enabled drivers build config
00:03:27.476  	compress/isal:	not in enabled drivers build config
00:03:27.476  	compress/mlx5:	not in enabled drivers build config
00:03:27.476  	compress/nitrox:	not in enabled drivers build config
00:03:27.476  	compress/octeontx:	not in enabled drivers build config
00:03:27.476  	compress/zlib:	not in enabled drivers build config
00:03:27.476  	regex/*:	missing internal dependency, "regexdev"
00:03:27.476  	ml/*:	missing internal dependency, "mldev"
00:03:27.476  	vdpa/ifc:	not in enabled drivers build config
00:03:27.476  	vdpa/mlx5:	not in enabled drivers build config
00:03:27.476  	vdpa/nfp:	not in enabled drivers build config
00:03:27.476  	vdpa/sfc:	not in enabled drivers build config
00:03:27.476  	event/*:	missing internal dependency, "eventdev"
00:03:27.476  	baseband/*:	missing internal dependency, "bbdev"
00:03:27.476  	gpu/*:	missing internal dependency, "gpudev"
00:03:27.476  	
00:03:27.476  
00:03:27.476  Build targets in project: 107
00:03:27.476  
00:03:27.476  DPDK 24.03.0
00:03:27.476  
00:03:27.476    User defined options
00:03:27.476      buildtype          : debug
00:03:27.476      default_library    : shared
00:03:27.476      libdir             : lib
00:03:27.476      prefix             : /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build
00:03:27.476      b_sanitize         : address
00:03:27.476      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -fPIC -Werror 
00:03:27.476      c_link_args        : -L/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib
00:03:27.476      cpu_instruction_set: native
00:03:27.476      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:27.476      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:27.476      enable_docs        : false
00:03:27.476      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb
00:03:27.476      enable_kmods       : false
00:03:27.476      max_lcores         : 128
00:03:27.476      tests              : false
00:03:27.476  
00:03:27.476  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:27.476  ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp'
00:03:27.476  [1/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:27.476  [2/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:27.476  [3/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:27.476  [4/363] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:27.476  [5/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:27.476  [6/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:27.476  [7/363] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:27.476  [8/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:27.476  [9/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:27.476  [10/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:27.476  [11/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:27.476  [12/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:27.476  [13/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:27.476  [14/363] Linking static target lib/librte_kvargs.a
00:03:27.476  [15/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:27.476  [16/363] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:27.476  [17/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:27.476  [18/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:27.476  [19/363] Linking static target lib/librte_log.a
00:03:27.476  [20/363] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.476  [21/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:27.476  [22/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:27.476  [23/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:27.476  [24/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:27.476  [25/363] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:27.476  [26/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:27.476  [27/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:27.476  [28/363] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:27.476  [29/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:27.476  [30/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:27.476  [31/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:27.476  [32/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:27.476  [33/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:27.476  [34/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:27.476  [35/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:27.476  [36/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:27.476  [37/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:27.476  [38/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:27.476  [39/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:27.476  [40/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:27.476  [41/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:27.476  [42/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:27.476  [43/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:27.476  [44/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:27.476  [45/363] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:27.476  [46/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:27.476  [47/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:27.476  [48/363] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:27.476  [49/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:27.476  [50/363] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:27.476  [51/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:27.476  [52/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:27.476  [53/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:27.739  [54/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:27.739  [55/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:27.739  [56/363] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:27.739  [57/363] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:27.739  [58/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:27.739  [59/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:27.739  [60/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:27.739  [61/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:27.739  [62/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:27.739  [63/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:27.739  [64/363] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:27.739  [65/363] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:27.739  [66/363] Linking static target lib/librte_pci.a
00:03:27.739  [67/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:27.739  [68/363] Linking static target lib/librte_meter.a
00:03:27.739  [69/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:27.739  [70/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:27.739  [71/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:27.739  [72/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:27.739  [73/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:27.739  [74/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:27.739  [75/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:27.739  [76/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:27.739  [77/363] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:27.739  [78/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:27.739  [79/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:27.739  [80/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:27.739  [81/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:27.739  [82/363] Linking static target lib/librte_telemetry.a
00:03:27.739  [83/363] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:27.739  [84/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:27.739  [85/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:27.739  [86/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:27.739  [87/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:27.739  [88/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:27.739  [89/363] Linking static target lib/librte_ring.a
00:03:27.739  [90/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:27.739  [91/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:27.739  [92/363] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:27.739  [93/363] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:27.739  [94/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:27.739  [95/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:27.739  [96/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:27.739  [97/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:27.739  [98/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:27.739  [99/363] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:27.739  [100/363] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:27.739  [101/363] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:27.739  [102/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:27.739  [103/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:27.739  [104/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:27.739  [105/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:28.007  [106/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:28.007  [107/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:03:28.007  [108/363] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:28.007  [109/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:28.007  [110/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:28.007  [111/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:28.007  [112/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:28.007  [113/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:28.007  [114/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:28.007  [115/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:28.007  [116/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:28.007  [117/363] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.007  [118/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:28.007  [119/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:28.007  [120/363] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:28.007  [121/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:28.007  [122/363] Linking static target lib/librte_net.a
00:03:28.007  [123/363] Linking static target lib/librte_mempool.a
00:03:28.007  [124/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:03:28.007  [125/363] Linking target lib/librte_log.so.24.1
00:03:28.007  [126/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:28.007  [127/363] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.007  [128/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:28.007  [129/363] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:28.007  [130/363] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.007  [131/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:28.007  [132/363] Linking static target lib/librte_rcu.a
00:03:28.269  [133/363] Linking static target lib/librte_eal.a
00:03:28.269  [134/363] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.269  [135/363] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:28.269  [136/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:28.269  [137/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:03:28.269  [138/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:28.269  [139/363] Linking target lib/librte_kvargs.so.24.1
00:03:28.269  [140/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:28.531  [141/363] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.531  [142/363] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:28.531  [143/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:28.531  [144/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:28.531  [145/363] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.531  [146/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:28.531  [147/363] Linking static target lib/librte_cmdline.a
00:03:28.531  [148/363] Linking target lib/librte_telemetry.so.24.1
00:03:28.531  [149/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:28.531  [150/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:28.531  [151/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:28.531  [152/363] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:28.531  [153/363] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:28.531  [154/363] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:28.531  [155/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:28.531  [156/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:03:28.531  [157/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:03:28.531  [158/363] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:03:28.531  [159/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:28.531  [160/363] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.531  [161/363] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:28.531  [162/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:28.531  [163/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:28.531  [164/363] Linking static target lib/librte_timer.a
00:03:28.790  [165/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:28.790  [166/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:28.790  [167/363] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:28.790  [168/363] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:28.790  [169/363] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:28.790  [170/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:28.790  [171/363] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:28.790  [172/363] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:28.790  [173/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:28.790  [174/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:28.790  [175/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:28.790  [176/363] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:28.790  [177/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:28.790  [178/363] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:28.790  [179/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:28.790  [180/363] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:28.790  [181/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:03:28.790  [182/363] Linking static target lib/librte_dmadev.a
00:03:28.790  [183/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:03:28.790  [184/363] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:28.790  [185/363] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:28.790  [186/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:28.790  [187/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:03:28.790  [188/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:03:28.790  [189/363] Linking static target lib/librte_power.a
00:03:28.790  [190/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:03:28.790  [191/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:03:28.790  [192/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:03:28.790  [193/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:03:28.790  [194/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:03:28.790  [195/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:03:28.790  [196/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:28.790  [197/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:03:28.790  [198/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:03:28.790  [199/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:03:28.790  [200/363] Linking static target lib/librte_compressdev.a
00:03:28.790  [201/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:03:28.790  [202/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:03:28.790  [203/363] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:28.790  [204/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:28.790  [205/363] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:28.790  [206/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:28.790  [207/363] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:28.790  [208/363] Linking static target lib/librte_mbuf.a
00:03:28.790  [209/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:03:29.049  [210/363] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:29.049  [211/363] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.049  [212/363] Linking static target lib/librte_reorder.a
00:03:29.049  [213/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:03:29.049  [214/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:03:29.049  [215/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:03:29.049  [216/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:03:29.049  [217/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:03:29.049  [218/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:03:29.049  [219/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:03:29.049  [220/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:03:29.049  [221/363] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:29.049  [222/363] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.049  [223/363] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:29.049  [224/363] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:29.049  [225/363] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:03:29.049  [226/363] Linking static target drivers/librte_bus_vdev.a
00:03:29.049  [227/363] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:03:29.049  [228/363] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:03:29.049  [229/363] Linking static target drivers/librte_bus_auxiliary.a
00:03:29.049  [230/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:03:29.049  [231/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:03:29.049  [232/363] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:29.049  [233/363] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:29.049  [234/363] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:29.049  [235/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:03:29.049  [236/363] Linking static target drivers/librte_bus_pci.a
00:03:29.049  [237/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:03:29.308  [238/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:03:29.308  [239/363] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [240/363] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [241/363] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [242/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:29.308  [243/363] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [244/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:03:29.308  [245/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:03:29.308  [246/363] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [247/363] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:29.308  [248/363] Linking static target lib/librte_security.a
00:03:29.308  [249/363] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [250/363] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [251/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:03:29.308  [252/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:03:29.308  [253/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:03:29.308  [254/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:03:29.308  [255/363] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.308  [256/363] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:29.308  [257/363] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:29.308  [258/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:29.308  [259/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:29.568  [260/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:03:29.568  [261/363] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.568  [262/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:03:29.568  [263/363] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:29.568  [264/363] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:29.568  [265/363] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:29.568  [266/363] Linking static target drivers/librte_mempool_ring.a
00:03:29.568  [267/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:03:29.568  [268/363] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.568  [269/363] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:29.568  [270/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:03:29.568  [271/363] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:03:29.568  [272/363] Linking static target lib/librte_hash.a
00:03:29.827  [273/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:03:29.827  [274/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:03:29.827  [275/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:03:29.827  [276/363] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:03:29.827  [277/363] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:03:29.827  [278/363] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:03:29.827  [279/363] Linking static target drivers/librte_crypto_mlx5.a
00:03:29.827  [280/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:03:30.085  [281/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:30.085  [282/363] Linking static target lib/librte_cryptodev.a
00:03:30.085  [283/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:03:30.085  [284/363] Linking static target drivers/libtmp_rte_common_mlx5.a
00:03:30.085  [285/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:03:30.085  [286/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:03:30.085  [287/363] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.343  [288/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:03:30.343  [289/363] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:03:30.343  [290/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:03:30.343  [291/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:03:30.343  [292/363] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:03:30.343  [293/363] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:03:30.343  [294/363] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:03:30.343  [295/363] Linking static target drivers/librte_common_mlx5.a
00:03:30.343  [296/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:03:30.601  [297/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:30.601  [298/363] Linking static target lib/librte_ethdev.a
00:03:30.601  [299/363] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:03:30.601  [300/363] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:03:30.601  [301/363] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:03:30.601  [302/363] Linking static target drivers/librte_crypto_ipsec_mb.a
00:03:30.601  [303/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:03:31.168  [304/363] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.734  [305/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:32.668  [306/363] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.235  [307/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:03:33.235  [308/363] Linking static target drivers/libtmp_rte_common_qat.a
00:03:33.493  [309/363] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.493  [310/363] Generating drivers/rte_common_qat.pmd.c with a custom command
00:03:33.493  [311/363] Linking target lib/librte_eal.so.24.1
00:03:33.759  [312/363] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:03:33.759  [313/363] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:03:33.759  [314/363] Linking static target drivers/librte_common_qat.a
00:03:33.759  [315/363] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:33.759  [316/363] Linking target lib/librte_pci.so.24.1
00:03:33.759  [317/363] Linking target lib/librte_ring.so.24.1
00:03:33.759  [318/363] Linking target lib/librte_meter.so.24.1
00:03:33.759  [319/363] Linking target lib/librte_timer.so.24.1
00:03:33.759  [320/363] Linking target drivers/librte_bus_vdev.so.24.1
00:03:33.759  [321/363] Linking target lib/librte_dmadev.so.24.1
00:03:33.759  [322/363] Linking target drivers/librte_bus_auxiliary.so.24.1
00:03:34.025  [323/363] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:34.025  [324/363] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:03:34.025  [325/363] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:03:34.025  [326/363] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:34.025  [327/363] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:34.025  [328/363] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:34.025  [329/363] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:34.025  [330/363] Linking target lib/librte_rcu.so.24.1
00:03:34.025  [331/363] Linking target lib/librte_mempool.so.24.1
00:03:34.025  [332/363] Linking target drivers/librte_bus_pci.so.24.1
00:03:34.025  [333/363] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:34.025  [334/363] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:34.025  [335/363] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:03:34.025  [336/363] Linking target lib/librte_mbuf.so.24.1
00:03:34.025  [337/363] Linking target drivers/librte_mempool_ring.so.24.1
00:03:34.285  [338/363] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:34.285  [339/363] Linking target lib/librte_compressdev.so.24.1
00:03:34.285  [340/363] Linking target lib/librte_reorder.so.24.1
00:03:34.285  [341/363] Linking target lib/librte_net.so.24.1
00:03:34.285  [342/363] Linking target lib/librte_cryptodev.so.24.1
00:03:34.285  [343/363] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:34.285  [344/363] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:03:34.285  [345/363] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:34.544  [346/363] Linking target lib/librte_security.so.24.1
00:03:34.544  [347/363] Linking target lib/librte_hash.so.24.1
00:03:34.544  [348/363] Linking target lib/librte_cmdline.so.24.1
00:03:34.544  [349/363] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.544  [350/363] Linking target lib/librte_ethdev.so.24.1
00:03:34.544  [351/363] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:03:34.544  [352/363] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:34.544  [353/363] Linking target drivers/librte_common_mlx5.so.24.1
00:03:34.544  [354/363] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:34.544  [355/363] Linking target lib/librte_power.so.24.1
00:03:34.804  [356/363] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:03:34.804  [357/363] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:03:34.804  [358/363] Linking target drivers/librte_common_qat.so.24.1
00:03:34.804  [359/363] Linking target drivers/librte_crypto_mlx5.so.24.1
00:03:35.063  [360/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:35.063  [361/363] Linking static target lib/librte_vhost.a
00:03:36.002  [362/363] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.002  [363/363] Linking target lib/librte_vhost.so.24.1
00:03:36.002  INFO: autodetecting backend as ninja
00:03:36.002  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp -j 88
00:03:36.939    CC lib/ut/ut.o
00:03:36.939    CC lib/log/log.o
00:03:36.939    CC lib/log/log_flags.o
00:03:36.939    CC lib/log/log_deprecated.o
00:03:36.939    CC lib/ut_mock/mock.o
00:03:36.939    LIB libspdk_ut.a
00:03:36.939    LIB libspdk_ut_mock.a
00:03:36.939    SO libspdk_ut.so.2.0
00:03:36.939    LIB libspdk_log.a
00:03:36.939    SO libspdk_ut_mock.so.6.0
00:03:37.197    SO libspdk_log.so.7.1
00:03:37.197    SYMLINK libspdk_ut.so
00:03:37.197    SYMLINK libspdk_ut_mock.so
00:03:37.197    SYMLINK libspdk_log.so
00:03:37.197    CC lib/dma/dma.o
00:03:37.197    CC lib/util/base64.o
00:03:37.197    CC lib/util/bit_array.o
00:03:37.197    CC lib/util/cpuset.o
00:03:37.197    CC lib/util/crc16.o
00:03:37.197    CC lib/util/crc32.o
00:03:37.197    CC lib/util/crc32c.o
00:03:37.197    CXX lib/trace_parser/trace.o
00:03:37.197    CC lib/util/crc32_ieee.o
00:03:37.197    CC lib/util/dif.o
00:03:37.197    CC lib/util/fd.o
00:03:37.197    CC lib/util/fd_group.o
00:03:37.197    CC lib/util/crc64.o
00:03:37.197    CC lib/util/file.o
00:03:37.197    CC lib/util/hexlify.o
00:03:37.197    CC lib/util/iov.o
00:03:37.197    CC lib/ioat/ioat.o
00:03:37.197    CC lib/util/math.o
00:03:37.197    CC lib/util/net.o
00:03:37.197    CC lib/util/pipe.o
00:03:37.197    CC lib/util/strerror_tls.o
00:03:37.197    CC lib/util/string.o
00:03:37.197    CC lib/util/uuid.o
00:03:37.197    CC lib/util/xor.o
00:03:37.197    CC lib/util/zipf.o
00:03:37.197    CC lib/util/md5.o
00:03:37.455    CC lib/vfio_user/host/vfio_user_pci.o
00:03:37.455    CC lib/vfio_user/host/vfio_user.o
00:03:37.455    LIB libspdk_dma.a
00:03:37.455    SO libspdk_dma.so.5.0
00:03:37.714    SYMLINK libspdk_dma.so
00:03:37.714    LIB libspdk_ioat.a
00:03:37.714    SO libspdk_ioat.so.7.0
00:03:37.714    LIB libspdk_vfio_user.a
00:03:37.714    SO libspdk_vfio_user.so.5.0
00:03:37.714    SYMLINK libspdk_ioat.so
00:03:37.714    SYMLINK libspdk_vfio_user.so
00:03:37.973    LIB libspdk_util.a
00:03:37.973    SO libspdk_util.so.10.1
00:03:38.237    SYMLINK libspdk_util.so
00:03:38.237    CC lib/json/json_parse.o
00:03:38.237    CC lib/json/json_util.o
00:03:38.237    CC lib/json/json_write.o
00:03:38.237    CC lib/conf/conf.o
00:03:38.237    CC lib/rdma_utils/rdma_utils.o
00:03:38.237    CC lib/vmd/vmd.o
00:03:38.237    CC lib/vmd/led.o
00:03:38.237    CC lib/env_dpdk/env.o
00:03:38.237    CC lib/env_dpdk/memory.o
00:03:38.237    CC lib/env_dpdk/pci.o
00:03:38.237    CC lib/env_dpdk/init.o
00:03:38.237    CC lib/env_dpdk/threads.o
00:03:38.237    CC lib/env_dpdk/pci_ioat.o
00:03:38.237    CC lib/env_dpdk/pci_virtio.o
00:03:38.237    CC lib/env_dpdk/pci_vmd.o
00:03:38.237    CC lib/idxd/idxd.o
00:03:38.237    CC lib/env_dpdk/pci_idxd.o
00:03:38.237    CC lib/idxd/idxd_user.o
00:03:38.237    CC lib/env_dpdk/pci_event.o
00:03:38.237    CC lib/env_dpdk/sigbus_handler.o
00:03:38.237    CC lib/idxd/idxd_kernel.o
00:03:38.237    CC lib/env_dpdk/pci_dpdk.o
00:03:38.237    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:38.237    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:38.496    LIB libspdk_conf.a
00:03:38.496    SO libspdk_conf.so.6.0
00:03:38.756    LIB libspdk_rdma_utils.a
00:03:38.756    LIB libspdk_json.a
00:03:38.756    SYMLINK libspdk_conf.so
00:03:38.756    SO libspdk_rdma_utils.so.1.0
00:03:38.756    SO libspdk_json.so.6.0
00:03:38.756    SYMLINK libspdk_rdma_utils.so
00:03:38.756    SYMLINK libspdk_json.so
00:03:38.756    CC lib/rdma_provider/common.o
00:03:38.756    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:38.756    CC lib/jsonrpc/jsonrpc_server.o
00:03:38.756    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:38.756    CC lib/jsonrpc/jsonrpc_client.o
00:03:38.756    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:39.016    LIB libspdk_idxd.a
00:03:39.016    LIB libspdk_rdma_provider.a
00:03:39.016    SO libspdk_rdma_provider.so.7.0
00:03:39.016    SO libspdk_idxd.so.12.1
00:03:39.016    LIB libspdk_vmd.a
00:03:39.016    SO libspdk_vmd.so.6.0
00:03:39.016    LIB libspdk_jsonrpc.a
00:03:39.016    SYMLINK libspdk_rdma_provider.so
00:03:39.274    LIB libspdk_trace_parser.a
00:03:39.274    SYMLINK libspdk_idxd.so
00:03:39.274    SO libspdk_jsonrpc.so.6.0
00:03:39.274    SO libspdk_trace_parser.so.6.0
00:03:39.274    SYMLINK libspdk_vmd.so
00:03:39.274    SYMLINK libspdk_jsonrpc.so
00:03:39.274    SYMLINK libspdk_trace_parser.so
00:03:39.274    CC lib/rpc/rpc.o
00:03:39.533    LIB libspdk_rpc.a
00:03:39.533    SO libspdk_rpc.so.6.0
00:03:39.533    SYMLINK libspdk_rpc.so
00:03:39.792    CC lib/trace/trace.o
00:03:39.792    CC lib/keyring/keyring.o
00:03:39.792    CC lib/keyring/keyring_rpc.o
00:03:39.792    CC lib/trace/trace_flags.o
00:03:39.792    CC lib/trace/trace_rpc.o
00:03:39.792    CC lib/notify/notify.o
00:03:39.792    CC lib/notify/notify_rpc.o
00:03:39.792    LIB libspdk_env_dpdk.a
00:03:39.792    LIB libspdk_notify.a
00:03:39.792    SO libspdk_env_dpdk.so.15.1
00:03:39.792    SO libspdk_notify.so.6.0
00:03:40.050    LIB libspdk_keyring.a
00:03:40.050    SYMLINK libspdk_notify.so
00:03:40.050    LIB libspdk_trace.a
00:03:40.050    SO libspdk_keyring.so.2.0
00:03:40.051    SO libspdk_trace.so.11.0
00:03:40.051    SYMLINK libspdk_env_dpdk.so
00:03:40.051    SYMLINK libspdk_keyring.so
00:03:40.051    SYMLINK libspdk_trace.so
00:03:40.310    CC lib/sock/sock.o
00:03:40.310    CC lib/sock/sock_rpc.o
00:03:40.310    CC lib/thread/thread.o
00:03:40.310    CC lib/thread/iobuf.o
00:03:40.568    LIB libspdk_sock.a
00:03:40.568    SO libspdk_sock.so.10.0
00:03:40.568    SYMLINK libspdk_sock.so
00:03:40.827    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:40.827    CC lib/nvme/nvme_ctrlr.o
00:03:40.827    CC lib/nvme/nvme_fabric.o
00:03:40.827    CC lib/nvme/nvme_ns.o
00:03:40.827    CC lib/nvme/nvme_pcie_common.o
00:03:40.827    CC lib/nvme/nvme_ns_cmd.o
00:03:40.827    CC lib/nvme/nvme_pcie.o
00:03:40.827    CC lib/nvme/nvme_qpair.o
00:03:40.827    CC lib/nvme/nvme.o
00:03:40.827    CC lib/nvme/nvme_quirks.o
00:03:40.827    CC lib/nvme/nvme_transport.o
00:03:40.827    CC lib/nvme/nvme_discovery.o
00:03:40.827    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:40.827    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:40.827    CC lib/nvme/nvme_opal.o
00:03:40.827    CC lib/nvme/nvme_tcp.o
00:03:40.827    CC lib/nvme/nvme_io_msg.o
00:03:40.827    CC lib/nvme/nvme_poll_group.o
00:03:40.827    CC lib/nvme/nvme_zns.o
00:03:40.827    CC lib/nvme/nvme_stubs.o
00:03:40.827    CC lib/nvme/nvme_auth.o
00:03:40.827    CC lib/nvme/nvme_cuse.o
00:03:40.827    CC lib/nvme/nvme_vfio_user.o
00:03:40.827    CC lib/nvme/nvme_rdma.o
00:03:41.765    LIB libspdk_thread.a
00:03:42.023    SO libspdk_thread.so.11.0
00:03:42.023    SYMLINK libspdk_thread.so
00:03:42.023    CC lib/vfu_tgt/tgt_endpoint.o
00:03:42.023    CC lib/blob/blobstore.o
00:03:42.023    CC lib/vfu_tgt/tgt_rpc.o
00:03:42.023    CC lib/blob/request.o
00:03:42.023    CC lib/blob/zeroes.o
00:03:42.023    CC lib/blob/blob_bs_dev.o
00:03:42.023    CC lib/fsdev/fsdev.o
00:03:42.023    CC lib/virtio/virtio.o
00:03:42.023    CC lib/virtio/virtio_vhost_user.o
00:03:42.023    CC lib/fsdev/fsdev_io.o
00:03:42.023    CC lib/fsdev/fsdev_rpc.o
00:03:42.023    CC lib/virtio/virtio_vfio_user.o
00:03:42.023    CC lib/virtio/virtio_pci.o
00:03:42.023    CC lib/accel/accel.o
00:03:42.023    CC lib/accel/accel_rpc.o
00:03:42.023    CC lib/init/json_config.o
00:03:42.023    CC lib/accel/accel_sw.o
00:03:42.023    CC lib/init/subsystem.o
00:03:42.023    CC lib/init/subsystem_rpc.o
00:03:42.023    CC lib/init/rpc.o
00:03:42.590    LIB libspdk_init.a
00:03:42.590    SO libspdk_init.so.6.0
00:03:42.590    LIB libspdk_vfu_tgt.a
00:03:42.590    SYMLINK libspdk_init.so
00:03:42.590    LIB libspdk_virtio.a
00:03:42.590    SO libspdk_vfu_tgt.so.3.0
00:03:42.590    SO libspdk_virtio.so.7.0
00:03:42.590    SYMLINK libspdk_vfu_tgt.so
00:03:42.590    SYMLINK libspdk_virtio.so
00:03:42.590    CC lib/event/app.o
00:03:42.590    CC lib/event/reactor.o
00:03:42.590    CC lib/event/log_rpc.o
00:03:42.590    CC lib/event/app_rpc.o
00:03:42.590    CC lib/event/scheduler_static.o
00:03:42.850    LIB libspdk_fsdev.a
00:03:42.850    SO libspdk_fsdev.so.2.0
00:03:42.850    SYMLINK libspdk_fsdev.so
00:03:43.109    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:43.109    LIB libspdk_event.a
00:03:43.109    SO libspdk_event.so.14.0
00:03:43.368    SYMLINK libspdk_event.so
00:03:43.368    LIB libspdk_accel.a
00:03:43.368    LIB libspdk_nvme.a
00:03:43.368    SO libspdk_accel.so.16.0
00:03:43.368    SYMLINK libspdk_accel.so
00:03:43.368    SO libspdk_nvme.so.15.0
00:03:43.626    CC lib/bdev/bdev.o
00:03:43.626    CC lib/bdev/bdev_rpc.o
00:03:43.627    CC lib/bdev/part.o
00:03:43.627    CC lib/bdev/bdev_zone.o
00:03:43.627    CC lib/bdev/scsi_nvme.o
00:03:43.627    SYMLINK libspdk_nvme.so
00:03:43.887    LIB libspdk_fuse_dispatcher.a
00:03:43.887    SO libspdk_fuse_dispatcher.so.1.0
00:03:43.887    SYMLINK libspdk_fuse_dispatcher.so
00:03:45.813    LIB libspdk_blob.a
00:03:45.813    SO libspdk_blob.so.12.0
00:03:45.813    SYMLINK libspdk_blob.so
00:03:45.813    CC lib/blobfs/blobfs.o
00:03:45.813    CC lib/blobfs/tree.o
00:03:45.813    CC lib/lvol/lvol.o
00:03:46.751    LIB libspdk_bdev.a
00:03:46.751    LIB libspdk_blobfs.a
00:03:46.751    SO libspdk_blobfs.so.11.0
00:03:46.751    SO libspdk_bdev.so.17.0
00:03:46.751    SYMLINK libspdk_blobfs.so
00:03:46.751    SYMLINK libspdk_bdev.so
00:03:46.751    LIB libspdk_lvol.a
00:03:46.751    SO libspdk_lvol.so.11.0
00:03:46.751    SYMLINK libspdk_lvol.so
00:03:47.013    CC lib/scsi/dev.o
00:03:47.013    CC lib/scsi/lun.o
00:03:47.013    CC lib/scsi/port.o
00:03:47.013    CC lib/scsi/scsi.o
00:03:47.013    CC lib/ftl/ftl_core.o
00:03:47.013    CC lib/scsi/scsi_bdev.o
00:03:47.013    CC lib/scsi/scsi_pr.o
00:03:47.013    CC lib/ftl/ftl_init.o
00:03:47.013    CC lib/scsi/scsi_rpc.o
00:03:47.013    CC lib/scsi/task.o
00:03:47.013    CC lib/ftl/ftl_debug.o
00:03:47.013    CC lib/ftl/ftl_layout.o
00:03:47.013    CC lib/ftl/ftl_io.o
00:03:47.013    CC lib/ftl/ftl_sb.o
00:03:47.013    CC lib/ublk/ublk.o
00:03:47.013    CC lib/ftl/ftl_l2p.o
00:03:47.013    CC lib/ftl/ftl_l2p_flat.o
00:03:47.013    CC lib/ublk/ublk_rpc.o
00:03:47.013    CC lib/ftl/ftl_nv_cache.o
00:03:47.013    CC lib/ftl/ftl_band.o
00:03:47.013    CC lib/ftl/ftl_band_ops.o
00:03:47.013    CC lib/ftl/ftl_writer.o
00:03:47.013    CC lib/ftl/ftl_rq.o
00:03:47.013    CC lib/ftl/ftl_reloc.o
00:03:47.013    CC lib/ftl/ftl_l2p_cache.o
00:03:47.013    CC lib/ftl/ftl_p2l.o
00:03:47.013    CC lib/ftl/ftl_p2l_log.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt.o
00:03:47.013    CC lib/nvmf/ctrlr.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:47.013    CC lib/nvmf/ctrlr_discovery.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:47.013    CC lib/nvmf/ctrlr_bdev.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:47.013    CC lib/nvmf/subsystem.o
00:03:47.013    CC lib/nbd/nbd.o
00:03:47.013    CC lib/nvmf/nvmf.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_md.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:47.013    CC lib/nbd/nbd_rpc.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:47.013    CC lib/nvmf/nvmf_rpc.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:47.013    CC lib/nvmf/tcp.o
00:03:47.013    CC lib/nvmf/transport.o
00:03:47.013    CC lib/nvmf/stubs.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_band.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:47.013    CC lib/nvmf/mdns_server.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:47.013    CC lib/nvmf/auth.o
00:03:47.013    CC lib/nvmf/vfio_user.o
00:03:47.013    CC lib/nvmf/rdma.o
00:03:47.013    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:47.013    CC lib/ftl/utils/ftl_md.o
00:03:47.013    CC lib/ftl/utils/ftl_conf.o
00:03:47.013    CC lib/ftl/utils/ftl_mempool.o
00:03:47.013    CC lib/ftl/utils/ftl_bitmap.o
00:03:47.013    CC lib/ftl/utils/ftl_property.o
00:03:47.013    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:47.013    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:47.013    CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:47.013    CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:47.013    CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:47.013    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:47.013    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:47.013    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:47.013    CC lib/ftl/base/ftl_base_dev.o
00:03:47.013    CC lib/ftl/base/ftl_base_bdev.o
00:03:47.013    CC lib/ftl/ftl_trace.o
00:03:47.947    LIB libspdk_nbd.a
00:03:47.947    SO libspdk_nbd.so.7.0
00:03:47.947    SYMLINK libspdk_nbd.so
00:03:47.947    LIB libspdk_ublk.a
00:03:47.947    LIB libspdk_scsi.a
00:03:47.947    SO libspdk_ublk.so.3.0
00:03:47.947    SO libspdk_scsi.so.9.0
00:03:47.947    SYMLINK libspdk_ublk.so
00:03:47.947    SYMLINK libspdk_scsi.so
00:03:48.208    CC lib/iscsi/init_grp.o
00:03:48.208    CC lib/iscsi/conn.o
00:03:48.208    CC lib/iscsi/iscsi.o
00:03:48.208    CC lib/iscsi/param.o
00:03:48.208    CC lib/iscsi/portal_grp.o
00:03:48.208    CC lib/iscsi/tgt_node.o
00:03:48.208    CC lib/iscsi/iscsi_subsystem.o
00:03:48.208    CC lib/iscsi/iscsi_rpc.o
00:03:48.208    CC lib/iscsi/task.o
00:03:48.208    CC lib/vhost/vhost.o
00:03:48.208    CC lib/vhost/vhost_rpc.o
00:03:48.208    CC lib/vhost/vhost_scsi.o
00:03:48.208    CC lib/vhost/vhost_blk.o
00:03:48.208    CC lib/vhost/rte_vhost_user.o
00:03:48.466    LIB libspdk_ftl.a
00:03:48.466    SO libspdk_ftl.so.9.0
00:03:48.725    SYMLINK libspdk_ftl.so
00:03:49.292    LIB libspdk_vhost.a
00:03:49.292    SO libspdk_vhost.so.8.0
00:03:49.292    SYMLINK libspdk_vhost.so
00:03:49.863    LIB libspdk_nvmf.a
00:03:49.863    LIB libspdk_iscsi.a
00:03:49.863    SO libspdk_iscsi.so.8.0
00:03:49.863    SO libspdk_nvmf.so.20.0
00:03:49.863    SYMLINK libspdk_iscsi.so
00:03:50.122    SYMLINK libspdk_nvmf.so
00:03:50.122    CC module/env_dpdk/env_dpdk_rpc.o
00:03:50.381    CC module/vfu_device/vfu_virtio.o
00:03:50.381    CC module/vfu_device/vfu_virtio_blk.o
00:03:50.381    CC module/vfu_device/vfu_virtio_scsi.o
00:03:50.381    CC module/vfu_device/vfu_virtio_rpc.o
00:03:50.381    CC module/vfu_device/vfu_virtio_fs.o
00:03:50.381    CC module/accel/ioat/accel_ioat.o
00:03:50.381    CC module/accel/error/accel_error.o
00:03:50.381    CC module/accel/ioat/accel_ioat_rpc.o
00:03:50.381    CC module/accel/error/accel_error_rpc.o
00:03:50.381    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:03:50.381    CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:03:50.381    CC module/accel/iaa/accel_iaa.o
00:03:50.381    CC module/accel/iaa/accel_iaa_rpc.o
00:03:50.381    CC module/sock/posix/posix.o
00:03:50.381    CC module/keyring/linux/keyring.o
00:03:50.381    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:50.381    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:50.381    CC module/scheduler/gscheduler/gscheduler.o
00:03:50.381    CC module/keyring/linux/keyring_rpc.o
00:03:50.381    CC module/blob/bdev/blob_bdev.o
00:03:50.381    CC module/fsdev/aio/fsdev_aio.o
00:03:50.381    CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:50.381    CC module/fsdev/aio/linux_aio_mgr.o
00:03:50.381    CC module/accel/dsa/accel_dsa.o
00:03:50.381    CC module/keyring/file/keyring.o
00:03:50.381    CC module/accel/dsa/accel_dsa_rpc.o
00:03:50.381    CC module/keyring/file/keyring_rpc.o
00:03:50.381    LIB libspdk_env_dpdk_rpc.a
00:03:50.381    SO libspdk_env_dpdk_rpc.so.6.0
00:03:50.381    SYMLINK libspdk_env_dpdk_rpc.so
00:03:50.640    LIB libspdk_keyring_linux.a
00:03:50.640    LIB libspdk_scheduler_gscheduler.a
00:03:50.640    LIB libspdk_keyring_file.a
00:03:50.640    LIB libspdk_scheduler_dpdk_governor.a
00:03:50.640    SO libspdk_keyring_linux.so.1.0
00:03:50.640    SO libspdk_scheduler_gscheduler.so.4.0
00:03:50.640    SO libspdk_keyring_file.so.2.0
00:03:50.640    SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:50.640    LIB libspdk_accel_ioat.a
00:03:50.640    LIB libspdk_accel_error.a
00:03:50.640    LIB libspdk_accel_iaa.a
00:03:50.640    LIB libspdk_scheduler_dynamic.a
00:03:50.640    SO libspdk_accel_ioat.so.6.0
00:03:50.640    SYMLINK libspdk_keyring_linux.so
00:03:50.640    SYMLINK libspdk_scheduler_gscheduler.so
00:03:50.640    SO libspdk_accel_error.so.2.0
00:03:50.640    SO libspdk_accel_iaa.so.3.0
00:03:50.640    SO libspdk_scheduler_dynamic.so.4.0
00:03:50.640    SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:50.640    SYMLINK libspdk_keyring_file.so
00:03:50.640    SYMLINK libspdk_accel_ioat.so
00:03:50.640    SYMLINK libspdk_scheduler_dynamic.so
00:03:50.640    SYMLINK libspdk_accel_error.so
00:03:50.640    SYMLINK libspdk_accel_iaa.so
00:03:50.640    LIB libspdk_blob_bdev.a
00:03:50.640    LIB libspdk_accel_dsa.a
00:03:50.640    SO libspdk_blob_bdev.so.12.0
00:03:50.640    SO libspdk_accel_dsa.so.5.0
00:03:50.640    SYMLINK libspdk_blob_bdev.so
00:03:50.640    SYMLINK libspdk_accel_dsa.so
00:03:50.901    CC module/bdev/gpt/gpt.o
00:03:50.901    CC module/bdev/gpt/vbdev_gpt.o
00:03:50.901    CC module/bdev/delay/vbdev_delay.o
00:03:50.901    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:50.901    CC module/bdev/null/bdev_null.o
00:03:50.901    CC module/blobfs/bdev/blobfs_bdev.o
00:03:50.901    CC module/bdev/null/bdev_null_rpc.o
00:03:50.901    CC module/bdev/lvol/vbdev_lvol.o
00:03:50.901    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:50.901    CC module/bdev/error/vbdev_error.o
00:03:50.901    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:50.901    CC module/bdev/malloc/bdev_malloc.o
00:03:50.901    CC module/bdev/raid/bdev_raid.o
00:03:50.901    CC module/bdev/error/vbdev_error_rpc.o
00:03:50.901    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:50.901    CC module/bdev/passthru/vbdev_passthru.o
00:03:50.901    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:50.901    CC module/bdev/raid/bdev_raid_rpc.o
00:03:50.901    CC module/bdev/raid/bdev_raid_sb.o
00:03:50.901    CC module/bdev/raid/raid0.o
00:03:50.901    CC module/bdev/aio/bdev_aio.o
00:03:50.901    CC module/bdev/split/vbdev_split.o
00:03:50.901    CC module/bdev/raid/raid1.o
00:03:50.901    CC module/bdev/aio/bdev_aio_rpc.o
00:03:50.901    CC module/bdev/split/vbdev_split_rpc.o
00:03:50.901    CC module/bdev/raid/concat.o
00:03:50.901    CC module/bdev/nvme/bdev_nvme.o
00:03:50.901    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:50.901    CC module/bdev/crypto/vbdev_crypto.o
00:03:50.901    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:50.901    CC module/bdev/crypto/vbdev_crypto_rpc.o
00:03:50.901    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:50.901    CC module/bdev/ftl/bdev_ftl.o
00:03:50.901    CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:50.901    CC module/bdev/nvme/nvme_rpc.o
00:03:50.901    CC module/bdev/iscsi/bdev_iscsi.o
00:03:50.901    CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:50.901    CC module/bdev/nvme/bdev_mdns_client.o
00:03:50.901    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:50.901    CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:50.901    CC module/bdev/virtio/bdev_virtio_blk.o
00:03:50.901    CC module/bdev/nvme/vbdev_opal.o
00:03:50.901    CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:50.901    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:51.161    LIB libspdk_vfu_device.a
00:03:51.161    SO libspdk_vfu_device.so.3.0
00:03:51.161    LIB libspdk_fsdev_aio.a
00:03:51.161    SO libspdk_fsdev_aio.so.1.0
00:03:51.161    LIB libspdk_sock_posix.a
00:03:51.161    SYMLINK libspdk_vfu_device.so
00:03:51.161    SO libspdk_sock_posix.so.6.0
00:03:51.420    SYMLINK libspdk_fsdev_aio.so
00:03:51.420    LIB libspdk_blobfs_bdev.a
00:03:51.420    SO libspdk_blobfs_bdev.so.6.0
00:03:51.420    SYMLINK libspdk_sock_posix.so
00:03:51.420    LIB libspdk_bdev_null.a
00:03:51.420    LIB libspdk_bdev_gpt.a
00:03:51.420    LIB libspdk_bdev_error.a
00:03:51.420    SO libspdk_bdev_null.so.6.0
00:03:51.420    SO libspdk_bdev_gpt.so.6.0
00:03:51.420    SYMLINK libspdk_blobfs_bdev.so
00:03:51.420    SO libspdk_bdev_error.so.6.0
00:03:51.420    LIB libspdk_bdev_passthru.a
00:03:51.420    LIB libspdk_bdev_split.a
00:03:51.420    SYMLINK libspdk_bdev_gpt.so
00:03:51.420    SYMLINK libspdk_bdev_null.so
00:03:51.420    SO libspdk_bdev_passthru.so.6.0
00:03:51.420    SYMLINK libspdk_bdev_error.so
00:03:51.420    SO libspdk_bdev_split.so.6.0
00:03:51.420    LIB libspdk_bdev_crypto.a
00:03:51.420    LIB libspdk_bdev_delay.a
00:03:51.420    LIB libspdk_bdev_ftl.a
00:03:51.420    SO libspdk_bdev_crypto.so.6.0
00:03:51.420    LIB libspdk_bdev_malloc.a
00:03:51.420    SO libspdk_bdev_delay.so.6.0
00:03:51.420    SO libspdk_bdev_ftl.so.6.0
00:03:51.420    SYMLINK libspdk_bdev_passthru.so
00:03:51.420    SYMLINK libspdk_bdev_split.so
00:03:51.420    LIB libspdk_bdev_iscsi.a
00:03:51.420    SO libspdk_bdev_malloc.so.6.0
00:03:51.681    LIB libspdk_bdev_zone_block.a
00:03:51.681    SO libspdk_bdev_iscsi.so.6.0
00:03:51.681    SYMLINK libspdk_bdev_crypto.so
00:03:51.681    SYMLINK libspdk_bdev_delay.so
00:03:51.681    SYMLINK libspdk_bdev_ftl.so
00:03:51.681    SO libspdk_bdev_zone_block.so.6.0
00:03:51.681    LIB libspdk_bdev_aio.a
00:03:51.681    SYMLINK libspdk_bdev_malloc.so
00:03:51.681    SO libspdk_bdev_aio.so.6.0
00:03:51.681    SYMLINK libspdk_bdev_iscsi.so
00:03:51.681    SYMLINK libspdk_bdev_zone_block.so
00:03:51.681    LIB libspdk_bdev_lvol.a
00:03:51.681    SYMLINK libspdk_bdev_aio.so
00:03:51.681    SO libspdk_bdev_lvol.so.6.0
00:03:51.681    SYMLINK libspdk_bdev_lvol.so
00:03:51.681    LIB libspdk_bdev_virtio.a
00:03:51.681    SO libspdk_bdev_virtio.so.6.0
00:03:51.941    SYMLINK libspdk_bdev_virtio.so
00:03:51.941    LIB libspdk_accel_dpdk_cryptodev.a
00:03:51.941    SO libspdk_accel_dpdk_cryptodev.so.3.0
00:03:52.201    SYMLINK libspdk_accel_dpdk_cryptodev.so
00:03:52.201    LIB libspdk_bdev_raid.a
00:03:52.201    SO libspdk_bdev_raid.so.6.0
00:03:52.201    SYMLINK libspdk_bdev_raid.so
00:03:54.107    LIB libspdk_bdev_nvme.a
00:03:54.107    SO libspdk_bdev_nvme.so.7.1
00:03:54.107    SYMLINK libspdk_bdev_nvme.so
00:03:54.107    CC module/event/subsystems/iobuf/iobuf.o
00:03:54.107    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:54.107    CC module/event/subsystems/sock/sock.o
00:03:54.107    CC module/event/subsystems/fsdev/fsdev.o
00:03:54.107    CC module/event/subsystems/keyring/keyring.o
00:03:54.107    CC module/event/subsystems/scheduler/scheduler.o
00:03:54.107    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:54.107    CC module/event/subsystems/vmd/vmd.o
00:03:54.107    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:54.107    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:54.365    LIB libspdk_event_keyring.a
00:03:54.365    LIB libspdk_event_fsdev.a
00:03:54.365    LIB libspdk_event_vhost_blk.a
00:03:54.365    LIB libspdk_event_scheduler.a
00:03:54.365    LIB libspdk_event_vfu_tgt.a
00:03:54.365    LIB libspdk_event_vmd.a
00:03:54.365    LIB libspdk_event_iobuf.a
00:03:54.366    SO libspdk_event_fsdev.so.1.0
00:03:54.366    SO libspdk_event_keyring.so.1.0
00:03:54.366    SO libspdk_event_scheduler.so.4.0
00:03:54.366    LIB libspdk_event_sock.a
00:03:54.366    SO libspdk_event_vhost_blk.so.3.0
00:03:54.366    SO libspdk_event_vfu_tgt.so.3.0
00:03:54.366    SO libspdk_event_vmd.so.6.0
00:03:54.366    SO libspdk_event_iobuf.so.3.0
00:03:54.366    SO libspdk_event_sock.so.5.0
00:03:54.366    SYMLINK libspdk_event_fsdev.so
00:03:54.366    SYMLINK libspdk_event_keyring.so
00:03:54.366    SYMLINK libspdk_event_scheduler.so
00:03:54.366    SYMLINK libspdk_event_vhost_blk.so
00:03:54.366    SYMLINK libspdk_event_vfu_tgt.so
00:03:54.366    SYMLINK libspdk_event_vmd.so
00:03:54.366    SYMLINK libspdk_event_iobuf.so
00:03:54.366    SYMLINK libspdk_event_sock.so
00:03:54.624    CC module/event/subsystems/accel/accel.o
00:03:54.624    LIB libspdk_event_accel.a
00:03:54.624    SO libspdk_event_accel.so.6.0
00:03:54.885    SYMLINK libspdk_event_accel.so
00:03:54.885    CC module/event/subsystems/bdev/bdev.o
00:03:55.143    LIB libspdk_event_bdev.a
00:03:55.143    SO libspdk_event_bdev.so.6.0
00:03:55.143    SYMLINK libspdk_event_bdev.so
00:03:55.143    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:55.143    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:55.143    CC module/event/subsystems/scsi/scsi.o
00:03:55.143    CC module/event/subsystems/ublk/ublk.o
00:03:55.143    CC module/event/subsystems/nbd/nbd.o
00:03:55.402    LIB libspdk_event_scsi.a
00:03:55.402    LIB libspdk_event_nbd.a
00:03:55.402    SO libspdk_event_scsi.so.6.0
00:03:55.402    SO libspdk_event_nbd.so.6.0
00:03:55.402    LIB libspdk_event_ublk.a
00:03:55.402    SO libspdk_event_ublk.so.3.0
00:03:55.402    LIB libspdk_event_nvmf.a
00:03:55.402    SYMLINK libspdk_event_scsi.so
00:03:55.402    SYMLINK libspdk_event_nbd.so
00:03:55.402    SO libspdk_event_nvmf.so.6.0
00:03:55.402    SYMLINK libspdk_event_ublk.so
00:03:55.402    SYMLINK libspdk_event_nvmf.so
00:03:55.662    CC module/event/subsystems/iscsi/iscsi.o
00:03:55.662    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:55.662    LIB libspdk_event_vhost_scsi.a
00:03:55.662    LIB libspdk_event_iscsi.a
00:03:55.662    SO libspdk_event_vhost_scsi.so.3.0
00:03:55.662    SO libspdk_event_iscsi.so.6.0
00:03:55.662    SYMLINK libspdk_event_vhost_scsi.so
00:03:55.921    SYMLINK libspdk_event_iscsi.so
00:03:55.921    SO libspdk.so.6.0
00:03:55.921    SYMLINK libspdk.so
00:03:55.921    CC app/trace_record/trace_record.o
00:03:55.921    CXX app/trace/trace.o
00:03:55.921    CC app/spdk_lspci/spdk_lspci.o
00:03:55.922    CC app/spdk_top/spdk_top.o
00:03:55.922    CC app/spdk_nvme_perf/perf.o
00:03:55.922    TEST_HEADER include/spdk/accel.h
00:03:55.922    CC app/spdk_nvme_discover/discovery_aer.o
00:03:55.922    TEST_HEADER include/spdk/accel_module.h
00:03:55.922    CC test/rpc_client/rpc_client_test.o
00:03:55.922    TEST_HEADER include/spdk/assert.h
00:03:55.922    TEST_HEADER include/spdk/barrier.h
00:03:55.922    CC app/spdk_nvme_identify/identify.o
00:03:55.922    TEST_HEADER include/spdk/base64.h
00:03:55.922    TEST_HEADER include/spdk/bdev.h
00:03:55.922    TEST_HEADER include/spdk/bdev_module.h
00:03:55.922    TEST_HEADER include/spdk/bdev_zone.h
00:03:55.922    TEST_HEADER include/spdk/bit_array.h
00:03:55.922    TEST_HEADER include/spdk/bit_pool.h
00:03:55.922    TEST_HEADER include/spdk/blob_bdev.h
00:03:55.922    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:55.922    TEST_HEADER include/spdk/blobfs.h
00:03:55.922    TEST_HEADER include/spdk/blob.h
00:03:55.922    TEST_HEADER include/spdk/conf.h
00:03:55.922    TEST_HEADER include/spdk/config.h
00:03:56.186    TEST_HEADER include/spdk/cpuset.h
00:03:56.186    TEST_HEADER include/spdk/crc16.h
00:03:56.186    TEST_HEADER include/spdk/crc32.h
00:03:56.186    TEST_HEADER include/spdk/crc64.h
00:03:56.186    TEST_HEADER include/spdk/dif.h
00:03:56.186    TEST_HEADER include/spdk/dma.h
00:03:56.186    TEST_HEADER include/spdk/endian.h
00:03:56.186    TEST_HEADER include/spdk/env_dpdk.h
00:03:56.186    TEST_HEADER include/spdk/env.h
00:03:56.186    TEST_HEADER include/spdk/event.h
00:03:56.186    TEST_HEADER include/spdk/fd_group.h
00:03:56.186    TEST_HEADER include/spdk/fd.h
00:03:56.186    TEST_HEADER include/spdk/file.h
00:03:56.186    TEST_HEADER include/spdk/fsdev.h
00:03:56.186    TEST_HEADER include/spdk/fsdev_module.h
00:03:56.186    TEST_HEADER include/spdk/ftl.h
00:03:56.186    TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:56.186    TEST_HEADER include/spdk/gpt_spec.h
00:03:56.186    TEST_HEADER include/spdk/hexlify.h
00:03:56.186    TEST_HEADER include/spdk/histogram_data.h
00:03:56.186    TEST_HEADER include/spdk/idxd.h
00:03:56.186    TEST_HEADER include/spdk/idxd_spec.h
00:03:56.186    TEST_HEADER include/spdk/init.h
00:03:56.186    TEST_HEADER include/spdk/ioat.h
00:03:56.186    TEST_HEADER include/spdk/ioat_spec.h
00:03:56.186    CC examples/interrupt_tgt/interrupt_tgt.o
00:03:56.186    TEST_HEADER include/spdk/iscsi_spec.h
00:03:56.186    TEST_HEADER include/spdk/json.h
00:03:56.186    TEST_HEADER include/spdk/jsonrpc.h
00:03:56.186    TEST_HEADER include/spdk/keyring.h
00:03:56.186    TEST_HEADER include/spdk/keyring_module.h
00:03:56.186    TEST_HEADER include/spdk/likely.h
00:03:56.186    TEST_HEADER include/spdk/lvol.h
00:03:56.186    TEST_HEADER include/spdk/log.h
00:03:56.186    TEST_HEADER include/spdk/md5.h
00:03:56.186    TEST_HEADER include/spdk/memory.h
00:03:56.186    TEST_HEADER include/spdk/mmio.h
00:03:56.186    TEST_HEADER include/spdk/nbd.h
00:03:56.186    TEST_HEADER include/spdk/net.h
00:03:56.186    TEST_HEADER include/spdk/notify.h
00:03:56.186    TEST_HEADER include/spdk/nvme.h
00:03:56.186    TEST_HEADER include/spdk/nvme_intel.h
00:03:56.186    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:56.186    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:56.186    TEST_HEADER include/spdk/nvme_spec.h
00:03:56.186    TEST_HEADER include/spdk/nvme_zns.h
00:03:56.186    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:56.186    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:56.186    CC app/spdk_dd/spdk_dd.o
00:03:56.186    TEST_HEADER include/spdk/nvmf.h
00:03:56.186    TEST_HEADER include/spdk/nvmf_spec.h
00:03:56.186    TEST_HEADER include/spdk/nvmf_transport.h
00:03:56.186    CC app/nvmf_tgt/nvmf_main.o
00:03:56.186    TEST_HEADER include/spdk/opal.h
00:03:56.186    TEST_HEADER include/spdk/opal_spec.h
00:03:56.186    TEST_HEADER include/spdk/pci_ids.h
00:03:56.186    TEST_HEADER include/spdk/pipe.h
00:03:56.186    TEST_HEADER include/spdk/queue.h
00:03:56.186    TEST_HEADER include/spdk/reduce.h
00:03:56.186    TEST_HEADER include/spdk/rpc.h
00:03:56.186    TEST_HEADER include/spdk/scsi.h
00:03:56.186    CC app/iscsi_tgt/iscsi_tgt.o
00:03:56.186    TEST_HEADER include/spdk/scheduler.h
00:03:56.186    TEST_HEADER include/spdk/scsi_spec.h
00:03:56.186    TEST_HEADER include/spdk/sock.h
00:03:56.186    TEST_HEADER include/spdk/stdinc.h
00:03:56.186    TEST_HEADER include/spdk/string.h
00:03:56.186    TEST_HEADER include/spdk/thread.h
00:03:56.186    TEST_HEADER include/spdk/trace.h
00:03:56.186    TEST_HEADER include/spdk/trace_parser.h
00:03:56.186    TEST_HEADER include/spdk/tree.h
00:03:56.186    TEST_HEADER include/spdk/ublk.h
00:03:56.186    TEST_HEADER include/spdk/util.h
00:03:56.186    TEST_HEADER include/spdk/uuid.h
00:03:56.186    TEST_HEADER include/spdk/version.h
00:03:56.186    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:56.186    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:56.186    TEST_HEADER include/spdk/vhost.h
00:03:56.186    TEST_HEADER include/spdk/vmd.h
00:03:56.186    TEST_HEADER include/spdk/xor.h
00:03:56.186    TEST_HEADER include/spdk/zipf.h
00:03:56.186    CXX test/cpp_headers/accel.o
00:03:56.186    CXX test/cpp_headers/accel_module.o
00:03:56.186    CXX test/cpp_headers/assert.o
00:03:56.186    CXX test/cpp_headers/barrier.o
00:03:56.186    CXX test/cpp_headers/base64.o
00:03:56.186    CXX test/cpp_headers/bdev.o
00:03:56.186    CXX test/cpp_headers/bdev_module.o
00:03:56.186    CXX test/cpp_headers/bdev_zone.o
00:03:56.186    CXX test/cpp_headers/bit_array.o
00:03:56.186    CXX test/cpp_headers/bit_pool.o
00:03:56.186    CXX test/cpp_headers/blob_bdev.o
00:03:56.186    CXX test/cpp_headers/blobfs_bdev.o
00:03:56.186    CXX test/cpp_headers/blobfs.o
00:03:56.186    CXX test/cpp_headers/blob.o
00:03:56.186    CXX test/cpp_headers/conf.o
00:03:56.186    CXX test/cpp_headers/config.o
00:03:56.186    CXX test/cpp_headers/crc16.o
00:03:56.186    CXX test/cpp_headers/cpuset.o
00:03:56.186    CXX test/cpp_headers/crc32.o
00:03:56.186    CXX test/cpp_headers/crc64.o
00:03:56.186    CXX test/cpp_headers/dma.o
00:03:56.186    CXX test/cpp_headers/dif.o
00:03:56.186    CXX test/cpp_headers/endian.o
00:03:56.186    CXX test/cpp_headers/env.o
00:03:56.186    CXX test/cpp_headers/env_dpdk.o
00:03:56.186    CXX test/cpp_headers/fd_group.o
00:03:56.186    CXX test/cpp_headers/event.o
00:03:56.186    CXX test/cpp_headers/fd.o
00:03:56.186    CC app/spdk_tgt/spdk_tgt.o
00:03:56.186    CXX test/cpp_headers/fsdev.o
00:03:56.186    CXX test/cpp_headers/file.o
00:03:56.186    CXX test/cpp_headers/fsdev_module.o
00:03:56.186    CXX test/cpp_headers/ftl.o
00:03:56.186    CXX test/cpp_headers/hexlify.o
00:03:56.186    CXX test/cpp_headers/gpt_spec.o
00:03:56.186    CXX test/cpp_headers/fuse_dispatcher.o
00:03:56.186    CXX test/cpp_headers/idxd_spec.o
00:03:56.186    CXX test/cpp_headers/histogram_data.o
00:03:56.186    CXX test/cpp_headers/idxd.o
00:03:56.186    CXX test/cpp_headers/init.o
00:03:56.186    CXX test/cpp_headers/ioat.o
00:03:56.186    CXX test/cpp_headers/ioat_spec.o
00:03:56.186    CXX test/cpp_headers/json.o
00:03:56.186    CXX test/cpp_headers/iscsi_spec.o
00:03:56.186    CXX test/cpp_headers/jsonrpc.o
00:03:56.186    CXX test/cpp_headers/keyring.o
00:03:56.186    CXX test/cpp_headers/likely.o
00:03:56.186    CXX test/cpp_headers/keyring_module.o
00:03:56.186    CXX test/cpp_headers/log.o
00:03:56.186    CXX test/cpp_headers/lvol.o
00:03:56.186    CXX test/cpp_headers/md5.o
00:03:56.186    CXX test/cpp_headers/memory.o
00:03:56.186    CXX test/cpp_headers/mmio.o
00:03:56.186    CXX test/cpp_headers/nbd.o
00:03:56.187    CXX test/cpp_headers/notify.o
00:03:56.187    CXX test/cpp_headers/net.o
00:03:56.187    CXX test/cpp_headers/nvme.o
00:03:56.187    CXX test/cpp_headers/nvme_intel.o
00:03:56.187    CXX test/cpp_headers/nvme_ocssd.o
00:03:56.187    CC examples/util/zipf/zipf.o
00:03:56.187    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:56.187    CC examples/ioat/verify/verify.o
00:03:56.187    CC examples/ioat/perf/perf.o
00:03:56.187    CC test/app/jsoncat/jsoncat.o
00:03:56.187    CC test/app/histogram_perf/histogram_perf.o
00:03:56.187    CC test/thread/poller_perf/poller_perf.o
00:03:56.187    CC test/app/stub/stub.o
00:03:56.187    CC test/env/memory/memory_ut.o
00:03:56.187    CC test/env/pci/pci_ut.o
00:03:56.187    CC app/fio/nvme/fio_plugin.o
00:03:56.187    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:56.187    CC test/dma/test_dma/test_dma.o
00:03:56.187    CC test/env/vtophys/vtophys.o
00:03:56.459    CC test/app/bdev_svc/bdev_svc.o
00:03:56.459    CC app/fio/bdev/fio_plugin.o
00:03:56.459    LINK spdk_lspci
00:03:56.459    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:56.724    CC test/env/mem_callbacks/mem_callbacks.o
00:03:56.724    LINK interrupt_tgt
00:03:56.724    LINK rpc_client_test
00:03:56.724    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:56.724    LINK spdk_nvme_discover
00:03:56.724    LINK nvmf_tgt
00:03:56.724    LINK jsoncat
00:03:56.724    LINK zipf
00:03:56.724    LINK spdk_trace_record
00:03:56.724    LINK poller_perf
00:03:56.724    LINK iscsi_tgt
00:03:56.725    LINK histogram_perf
00:03:56.991    CXX test/cpp_headers/nvme_spec.o
00:03:56.991    LINK vtophys
00:03:56.991    LINK spdk_tgt
00:03:56.991    CXX test/cpp_headers/nvme_zns.o
00:03:56.991    CXX test/cpp_headers/nvmf_cmd.o
00:03:56.991    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:56.991    CXX test/cpp_headers/nvmf.o
00:03:56.991    LINK stub
00:03:56.991    CXX test/cpp_headers/nvmf_spec.o
00:03:56.991    CXX test/cpp_headers/nvmf_transport.o
00:03:56.991    CXX test/cpp_headers/opal.o
00:03:56.991    CXX test/cpp_headers/opal_spec.o
00:03:56.991    CXX test/cpp_headers/pci_ids.o
00:03:56.991    CXX test/cpp_headers/pipe.o
00:03:56.991    CXX test/cpp_headers/queue.o
00:03:56.991    CXX test/cpp_headers/reduce.o
00:03:56.991    CXX test/cpp_headers/rpc.o
00:03:56.991    CXX test/cpp_headers/scheduler.o
00:03:56.991    CXX test/cpp_headers/scsi.o
00:03:56.991    CXX test/cpp_headers/scsi_spec.o
00:03:56.991    LINK env_dpdk_post_init
00:03:56.991    CXX test/cpp_headers/sock.o
00:03:56.991    CXX test/cpp_headers/stdinc.o
00:03:56.991    CXX test/cpp_headers/string.o
00:03:56.991    CXX test/cpp_headers/trace.o
00:03:56.991    CXX test/cpp_headers/thread.o
00:03:56.991    CXX test/cpp_headers/trace_parser.o
00:03:56.991    CXX test/cpp_headers/tree.o
00:03:56.991    CXX test/cpp_headers/ublk.o
00:03:56.991    CXX test/cpp_headers/util.o
00:03:56.991    CXX test/cpp_headers/uuid.o
00:03:56.991    CXX test/cpp_headers/version.o
00:03:56.991    CXX test/cpp_headers/vfio_user_pci.o
00:03:56.991    CXX test/cpp_headers/vfio_user_spec.o
00:03:56.991    CXX test/cpp_headers/vhost.o
00:03:56.991    CXX test/cpp_headers/vmd.o
00:03:56.991    CXX test/cpp_headers/xor.o
00:03:56.991    CXX test/cpp_headers/zipf.o
00:03:56.991    LINK bdev_svc
00:03:56.991    LINK verify
00:03:56.991    LINK ioat_perf
00:03:56.991    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:56.991    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:57.250    LINK spdk_dd
00:03:57.250    LINK spdk_trace
00:03:57.250    LINK pci_ut
00:03:57.250    LINK test_dma
00:03:57.509    LINK nvme_fuzz
00:03:57.509    CC examples/idxd/perf/perf.o
00:03:57.509    CC test/event/event_perf/event_perf.o
00:03:57.509    CC examples/sock/hello_world/hello_sock.o
00:03:57.509    CC examples/vmd/lsvmd/lsvmd.o
00:03:57.509    CC test/event/reactor_perf/reactor_perf.o
00:03:57.509    CC examples/vmd/led/led.o
00:03:57.509    CC test/event/reactor/reactor.o
00:03:57.509    CC test/event/app_repeat/app_repeat.o
00:03:57.509    CC test/event/scheduler/scheduler.o
00:03:57.509    CC examples/thread/thread/thread_ex.o
00:03:57.509    LINK lsvmd
00:03:57.509    LINK event_perf
00:03:57.509    LINK reactor
00:03:57.509    LINK reactor_perf
00:03:57.509    LINK led
00:03:57.509    CC app/vhost/vhost.o
00:03:57.509    LINK app_repeat
00:03:57.768    LINK mem_callbacks
00:03:57.768    LINK spdk_nvme_identify
00:03:57.768    LINK scheduler
00:03:57.768    LINK hello_sock
00:03:57.768    LINK spdk_nvme
00:03:57.768    LINK thread
00:03:57.768    LINK spdk_top
00:03:57.768    CC test/nvme/aer/aer.o
00:03:57.768    CC test/nvme/reset/reset.o
00:03:57.768    CC test/nvme/startup/startup.o
00:03:57.768    CC test/nvme/err_injection/err_injection.o
00:03:57.768    CC test/nvme/simple_copy/simple_copy.o
00:03:57.768    CC test/nvme/boot_partition/boot_partition.o
00:03:57.768    CC test/nvme/cuse/cuse.o
00:03:57.768    CC test/nvme/sgl/sgl.o
00:03:57.768    CC test/nvme/fdp/fdp.o
00:03:57.768    CC test/nvme/e2edp/nvme_dp.o
00:03:57.768    CC test/nvme/overhead/overhead.o
00:03:57.768    CC test/nvme/compliance/nvme_compliance.o
00:03:57.768    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:57.768    CC test/nvme/reserve/reserve.o
00:03:57.768    CC test/nvme/connect_stress/connect_stress.o
00:03:57.768    LINK spdk_bdev
00:03:57.768    CC test/nvme/fused_ordering/fused_ordering.o
00:03:57.768    LINK spdk_nvme_perf
00:03:57.768    CC test/accel/dif/dif.o
00:03:57.768    CC test/blobfs/mkfs/mkfs.o
00:03:57.768    LINK idxd_perf
00:03:57.768    LINK vhost
00:03:57.768    LINK vhost_fuzz
00:03:57.768    CC test/lvol/esnap/esnap.o
00:03:58.026    LINK boot_partition
00:03:58.026    LINK startup
00:03:58.026    LINK err_injection
00:03:58.026    LINK connect_stress
00:03:58.026    LINK doorbell_aers
00:03:58.026    LINK fused_ordering
00:03:58.026    LINK reserve
00:03:58.026    LINK mkfs
00:03:58.026    LINK simple_copy
00:03:58.026    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:58.026    CC examples/nvme/hotplug/hotplug.o
00:03:58.026    CC examples/nvme/abort/abort.o
00:03:58.026    CC examples/nvme/arbitration/arbitration.o
00:03:58.026    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:58.026    CC examples/nvme/reconnect/reconnect.o
00:03:58.026    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:58.026    CC examples/nvme/hello_world/hello_world.o
00:03:58.026    LINK reset
00:03:58.026    LINK sgl
00:03:58.026    LINK nvme_dp
00:03:58.026    LINK aer
00:03:58.026    LINK overhead
00:03:58.026    CC examples/accel/perf/accel_perf.o
00:03:58.026    CC examples/blob/cli/blobcli.o
00:03:58.026    CC examples/fsdev/hello_world/hello_fsdev.o
00:03:58.026    CC examples/blob/hello_world/hello_blob.o
00:03:58.026    LINK memory_ut
00:03:58.284    LINK nvme_compliance
00:03:58.284    LINK fdp
00:03:58.284    LINK pmr_persistence
00:03:58.284    LINK cmb_copy
00:03:58.284    LINK hello_world
00:03:58.284    LINK hotplug
00:03:58.284    LINK hello_blob
00:03:58.544    LINK arbitration
00:03:58.544    LINK hello_fsdev
00:03:58.544    LINK reconnect
00:03:58.544    LINK abort
00:03:58.544    LINK dif
00:03:58.544    LINK nvme_manage
00:03:58.804    LINK accel_perf
00:03:58.804    LINK blobcli
00:03:59.063    CC test/bdev/bdevio/bdevio.o
00:03:59.063    CC examples/bdev/hello_world/hello_bdev.o
00:03:59.063    LINK iscsi_fuzz
00:03:59.063    CC examples/bdev/bdevperf/bdevperf.o
00:03:59.323    LINK cuse
00:03:59.323    LINK hello_bdev
00:03:59.323    LINK bdevio
00:03:59.892    LINK bdevperf
00:04:00.151    CC examples/nvmf/nvmf/nvmf.o
00:04:00.409    LINK nvmf
00:04:03.701    LINK esnap
00:04:03.701  
00:04:03.701  real	1m12.992s
00:04:03.701  user	18m34.275s
00:04:03.701  sys	4m10.773s
00:04:03.701   10:54:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:03.701   10:54:20 make -- common/autotest_common.sh@10 -- $ set +x
00:04:03.701  ************************************
00:04:03.701  END TEST make
00:04:03.701  ************************************
00:04:03.701   10:54:20  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:03.701   10:54:20  -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:03.701   10:54:20  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:03.701   10:54:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.701   10:54:20  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:04:03.701   10:54:20  -- pm/common@44 -- $ pid=19609
00:04:03.701   10:54:20  -- pm/common@50 -- $ kill -TERM 19609
00:04:03.701   10:54:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.701   10:54:20  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:04:03.701   10:54:20  -- pm/common@44 -- $ pid=19611
00:04:03.701   10:54:20  -- pm/common@50 -- $ kill -TERM 19611
00:04:03.701   10:54:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.701   10:54:20  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:04:03.701   10:54:20  -- pm/common@44 -- $ pid=19613
00:04:03.701   10:54:20  -- pm/common@50 -- $ kill -TERM 19613
00:04:03.701   10:54:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.701   10:54:20  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:04:03.701   10:54:20  -- pm/common@44 -- $ pid=19642
00:04:03.701   10:54:20  -- pm/common@50 -- $ sudo -E kill -TERM 19642
00:04:03.701   10:54:20  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:04:03.701   10:54:20  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf
00:04:03.701    10:54:20  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:03.701     10:54:20  -- common/autotest_common.sh@1711 -- # lcov --version
00:04:03.701     10:54:20  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:03.701    10:54:20  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:03.701    10:54:20  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:03.701    10:54:20  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:03.701    10:54:20  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:03.701    10:54:20  -- scripts/common.sh@336 -- # IFS=.-:
00:04:03.701    10:54:20  -- scripts/common.sh@336 -- # read -ra ver1
00:04:03.701    10:54:20  -- scripts/common.sh@337 -- # IFS=.-:
00:04:03.701    10:54:20  -- scripts/common.sh@337 -- # read -ra ver2
00:04:03.701    10:54:20  -- scripts/common.sh@338 -- # local 'op=<'
00:04:03.701    10:54:20  -- scripts/common.sh@340 -- # ver1_l=2
00:04:03.701    10:54:20  -- scripts/common.sh@341 -- # ver2_l=1
00:04:03.701    10:54:20  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:03.701    10:54:20  -- scripts/common.sh@344 -- # case "$op" in
00:04:03.701    10:54:20  -- scripts/common.sh@345 -- # : 1
00:04:03.701    10:54:20  -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:03.701    10:54:20  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:03.701     10:54:20  -- scripts/common.sh@365 -- # decimal 1
00:04:03.701     10:54:20  -- scripts/common.sh@353 -- # local d=1
00:04:03.701     10:54:20  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:03.701     10:54:20  -- scripts/common.sh@355 -- # echo 1
00:04:03.701    10:54:20  -- scripts/common.sh@365 -- # ver1[v]=1
00:04:03.701     10:54:20  -- scripts/common.sh@366 -- # decimal 2
00:04:03.701     10:54:20  -- scripts/common.sh@353 -- # local d=2
00:04:03.961     10:54:20  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:03.961     10:54:20  -- scripts/common.sh@355 -- # echo 2
00:04:03.961    10:54:20  -- scripts/common.sh@366 -- # ver2[v]=2
00:04:03.961    10:54:20  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:03.961    10:54:20  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:03.961    10:54:20  -- scripts/common.sh@368 -- # return 0
00:04:03.961    10:54:20  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:03.961    10:54:20  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:03.961  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.961  		--rc genhtml_branch_coverage=1
00:04:03.961  		--rc genhtml_function_coverage=1
00:04:03.961  		--rc genhtml_legend=1
00:04:03.961  		--rc geninfo_all_blocks=1
00:04:03.961  		--rc geninfo_unexecuted_blocks=1
00:04:03.961  		
00:04:03.961  		'
00:04:03.961    10:54:20  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:03.961  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.961  		--rc genhtml_branch_coverage=1
00:04:03.961  		--rc genhtml_function_coverage=1
00:04:03.961  		--rc genhtml_legend=1
00:04:03.961  		--rc geninfo_all_blocks=1
00:04:03.961  		--rc geninfo_unexecuted_blocks=1
00:04:03.961  		
00:04:03.961  		'
00:04:03.961    10:54:20  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:03.961  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.961  		--rc genhtml_branch_coverage=1
00:04:03.961  		--rc genhtml_function_coverage=1
00:04:03.961  		--rc genhtml_legend=1
00:04:03.961  		--rc geninfo_all_blocks=1
00:04:03.961  		--rc geninfo_unexecuted_blocks=1
00:04:03.961  		
00:04:03.961  		'
00:04:03.961    10:54:20  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:03.961  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:03.961  		--rc genhtml_branch_coverage=1
00:04:03.961  		--rc genhtml_function_coverage=1
00:04:03.961  		--rc genhtml_legend=1
00:04:03.961  		--rc geninfo_all_blocks=1
00:04:03.961  		--rc geninfo_unexecuted_blocks=1
00:04:03.961  		
00:04:03.961  		'
00:04:03.961   10:54:20  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:04:03.961     10:54:20  -- nvmf/common.sh@7 -- # uname -s
00:04:03.961    10:54:20  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:03.961    10:54:20  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:03.961    10:54:20  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:03.961    10:54:20  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:03.961    10:54:20  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:03.961    10:54:20  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:03.961    10:54:20  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:03.961    10:54:20  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:03.961    10:54:20  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:03.961     10:54:20  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:03.961    10:54:20  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:04:03.961    10:54:20  -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:04:03.961    10:54:20  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:03.961    10:54:20  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:03.961    10:54:20  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:03.961    10:54:20  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:03.961    10:54:20  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:04:03.961     10:54:20  -- scripts/common.sh@15 -- # shopt -s extglob
00:04:03.961     10:54:20  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:03.961     10:54:20  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:03.961     10:54:20  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:03.961      10:54:20  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.961      10:54:20  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.961      10:54:20  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.961      10:54:20  -- paths/export.sh@5 -- # export PATH
00:04:03.961      10:54:20  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:03.961    10:54:20  -- nvmf/common.sh@51 -- # : 0
00:04:03.961    10:54:20  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:03.961    10:54:20  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:03.961    10:54:20  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:03.961    10:54:20  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:03.961    10:54:20  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:03.961    10:54:20  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:04:03.961  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:03.961    10:54:20  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:03.961    10:54:20  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:03.961    10:54:20  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:03.961   10:54:20  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:04:03.961    10:54:20  -- spdk/autotest.sh@32 -- # uname -s
00:04:03.961   10:54:20  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:04:03.961   10:54:20  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:04:03.961   10:54:20  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:04:03.961   10:54:20  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:04:03.961   10:54:20  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:04:03.961   10:54:20  -- spdk/autotest.sh@44 -- # modprobe nbd
00:04:03.961    10:54:20  -- spdk/autotest.sh@46 -- # type -P udevadm
00:04:03.961   10:54:20  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:04:03.961   10:54:20  -- spdk/autotest.sh@48 -- # udevadm_pid=88376
00:04:03.961   10:54:20  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:04:03.961   10:54:20  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:04:03.961   10:54:20  -- pm/common@17 -- # local monitor
00:04:03.961   10:54:20  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.961   10:54:20  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.961   10:54:20  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.961    10:54:20  -- pm/common@21 -- # date +%s
00:04:03.961   10:54:20  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:04:03.961   10:54:20  -- pm/common@25 -- # sleep 1
00:04:03.961    10:54:20  -- pm/common@21 -- # date +%s
00:04:03.961    10:54:20  -- pm/common@21 -- # date +%s
00:04:03.961    10:54:20  -- pm/common@21 -- # date +%s
00:04:03.961   10:54:20  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733738060
00:04:03.961   10:54:20  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733738060
00:04:03.961   10:54:20  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733738060
00:04:03.961   10:54:20  -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733738060
00:04:03.961  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733738060_collect-cpu-load.pm.log
00:04:03.961  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733738060_collect-cpu-temp.pm.log
00:04:03.961  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733738060_collect-vmstat.pm.log
00:04:03.961  Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733738060_collect-bmc-pm.bmc.pm.log
00:04:04.923   10:54:21  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:04:04.923   10:54:21  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:04:04.923   10:54:21  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:04.923   10:54:21  -- common/autotest_common.sh@10 -- # set +x
00:04:04.923   10:54:21  -- spdk/autotest.sh@59 -- # create_test_list
00:04:04.923   10:54:21  -- common/autotest_common.sh@752 -- # xtrace_disable
00:04:04.923   10:54:21  -- common/autotest_common.sh@10 -- # set +x
00:04:04.923     10:54:21  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:04:04.924    10:54:21  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:04.924   10:54:21  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:04.924   10:54:21  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:04:04.924   10:54:21  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:04:04.924   10:54:21  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:04:04.924    10:54:21  -- common/autotest_common.sh@1457 -- # uname
00:04:04.924   10:54:21  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:04:04.924   10:54:21  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:04:04.924    10:54:21  -- common/autotest_common.sh@1477 -- # uname
00:04:04.924   10:54:21  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:04:04.924   10:54:21  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:04:04.924   10:54:21  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:04:04.924  lcov: LCOV version 1.15
00:04:04.924   10:54:21  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:04:17.140  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:04:17.140  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:04:29.354   10:54:44  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:04:29.354   10:54:44  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:29.354   10:54:44  -- common/autotest_common.sh@10 -- # set +x
00:04:29.354   10:54:44  -- spdk/autotest.sh@78 -- # rm -f
00:04:29.354   10:54:44  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:04:29.354  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:04:29.354  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:04:29.354  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:04:29.354  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:04:29.354  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:04:29.354  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:04:29.354  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:04:29.354  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:04:29.354  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:04:29.354  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:04:29.354  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:04:29.354  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:04:29.354  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:04:29.354  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:04:29.354  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:04:29.354  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:04:29.354  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:04:29.354   10:54:45  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:04:29.354   10:54:45  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:04:29.354   10:54:45  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:04:29.354   10:54:45  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:04:29.354   10:54:45  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:04:29.354   10:54:45  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:04:29.354   10:54:45  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:04:29.354   10:54:45  -- common/autotest_common.sh@1669 -- # bdf=0000:0d:00.0
00:04:29.354   10:54:45  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:04:29.354   10:54:45  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:04:29.354   10:54:45  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:04:29.354   10:54:45  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:29.354   10:54:45  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:29.354   10:54:45  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:29.354   10:54:45  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:29.354   10:54:45  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:29.354   10:54:45  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:29.354   10:54:45  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:29.354   10:54:45  -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:29.354  No valid GPT data, bailing
00:04:29.354    10:54:45  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:29.354   10:54:45  -- scripts/common.sh@394 -- # pt=
00:04:29.354   10:54:45  -- scripts/common.sh@395 -- # return 1
00:04:29.354   10:54:45  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:29.354  1+0 records in
00:04:29.354  1+0 records out
00:04:29.354  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00170968 s, 613 MB/s
00:04:29.354   10:54:45  -- spdk/autotest.sh@105 -- # sync
00:04:29.354   10:54:45  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:29.354   10:54:45  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:29.354    10:54:45  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:30.729    10:54:47  -- spdk/autotest.sh@111 -- # uname -s
00:04:30.989   10:54:47  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:30.989   10:54:47  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:30.989   10:54:47  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:04:31.926  Hugepages
00:04:31.926  node     hugesize     free /  total
00:04:31.926  node0   1048576kB        0 /      0
00:04:31.926  node0      2048kB        0 /      0
00:04:31.926  node1   1048576kB        0 /      0
00:04:31.926  node1      2048kB        0 /      0
00:04:31.926  
00:04:31.926  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:04:31.926  I/OAT                     0000:00:04.0    8086   6f20   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.1    8086   6f21   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.2    8086   6f22   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.3    8086   6f23   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.4    8086   6f24   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.5    8086   6f25   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.6    8086   6f26   0       ioatdma          -          -
00:04:31.926  I/OAT                     0000:00:04.7    8086   6f27   0       ioatdma          -          -
00:04:31.926  NVMe                      0000:0d:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:04:31.926  I/OAT                     0000:80:04.0    8086   6f20   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.1    8086   6f21   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.2    8086   6f22   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.3    8086   6f23   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.4    8086   6f24   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.5    8086   6f25   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.6    8086   6f26   1       ioatdma          -          -
00:04:31.926  I/OAT                     0000:80:04.7    8086   6f27   1       ioatdma          -          -
00:04:31.926    10:54:48  -- spdk/autotest.sh@117 -- # uname -s
00:04:31.926   10:54:48  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:31.926   10:54:48  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:31.926   10:54:48  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:04:33.304  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:04:33.304  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:04:33.304  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:04:34.242  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
00:04:34.501   10:54:51  -- common/autotest_common.sh@1517 -- # sleep 1
00:04:35.439   10:54:52  -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:35.439   10:54:52  -- common/autotest_common.sh@1518 -- # local bdfs
00:04:35.439   10:54:52  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:35.439    10:54:52  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:35.439    10:54:52  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:35.439    10:54:52  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:35.439    10:54:52  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:35.439     10:54:52  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:35.439     10:54:52  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:35.439    10:54:52  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:35.439    10:54:52  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:04:35.439   10:54:52  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:04:36.818  Waiting for block devices as requested
00:04:36.818  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:04:36.818  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:04:36.818  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:04:36.818  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:04:37.077  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:04:37.077  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:04:37.077  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:04:37.077  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:04:37.336  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:04:37.336  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:04:37.336  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:04:37.595  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:04:37.596  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:04:37.596  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:04:37.596  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:04:37.855  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:04:37.855  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:04:38.113   10:54:54  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:38.113    10:54:54  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0d:00.0
00:04:38.113     10:54:54  -- common/autotest_common.sh@1487 -- # grep 0000:0d:00.0/nvme/nvme
00:04:38.113     10:54:54  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:38.113    10:54:54  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:04:38.113    10:54:54  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0 ]]
00:04:38.113     10:54:54  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0d:00.0/nvme/nvme0
00:04:38.113    10:54:54  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:38.113   10:54:54  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:38.113   10:54:54  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:38.113    10:54:54  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:38.113    10:54:54  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:38.113    10:54:54  -- common/autotest_common.sh@1531 -- # grep oacs
00:04:38.113   10:54:54  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:04:38.113   10:54:54  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:38.113   10:54:54  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:38.113    10:54:54  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:38.113    10:54:54  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:38.113    10:54:54  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:38.113   10:54:54  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:38.113   10:54:54  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:04:38.113   10:54:54  -- common/autotest_common.sh@1543 -- # continue
00:04:38.113   10:54:54  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:38.113   10:54:54  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:38.113   10:54:54  -- common/autotest_common.sh@10 -- # set +x
00:04:38.113   10:54:54  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:38.113   10:54:54  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:38.113   10:54:54  -- common/autotest_common.sh@10 -- # set +x
00:04:38.113   10:54:54  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:04:39.051  0000:00:04.7 (8086 6f27): ioatdma -> vfio-pci
00:04:39.051  0000:00:04.6 (8086 6f26): ioatdma -> vfio-pci
00:04:39.051  0000:00:04.5 (8086 6f25): ioatdma -> vfio-pci
00:04:39.051  0000:00:04.4 (8086 6f24): ioatdma -> vfio-pci
00:04:39.319  0000:00:04.3 (8086 6f23): ioatdma -> vfio-pci
00:04:39.319  0000:00:04.2 (8086 6f22): ioatdma -> vfio-pci
00:04:39.319  0000:00:04.1 (8086 6f21): ioatdma -> vfio-pci
00:04:39.319  0000:00:04.0 (8086 6f20): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.7 (8086 6f27): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.6 (8086 6f26): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.5 (8086 6f25): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.4 (8086 6f24): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.3 (8086 6f23): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.2 (8086 6f22): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.1 (8086 6f21): ioatdma -> vfio-pci
00:04:39.319  0000:80:04.0 (8086 6f20): ioatdma -> vfio-pci
00:04:40.259  0000:0d:00.0 (8086 0a54): nvme -> vfio-pci
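Each `ioatdma -> vfio-pci` / `nvme -> vfio-pci` line above is setup.sh moving a PCI device from its kernel driver to vfio-pci through sysfs. A sketch of one common mechanism for that rebind (the path helpers are real sysfs interfaces; the exact writes setup.sh performs may differ, and the rebind itself needs root plus an actual device, so only the path composition runs here):

```shell
#!/usr/bin/env bash
# Sysfs paths used to move a PCI device to vfio-pci, the mechanism
# behind lines like "0000:0d:00.0 (8086 0a54): nvme -> vfio-pci".
unbind_path()   { printf '/sys/bus/pci/devices/%s/driver/unbind' "$1"; }
override_path() { printf '/sys/bus/pci/devices/%s/driver_override' "$1"; }

# Illustrative rebind (requires root and a real device; not executed here):
rebind_to_vfio() {
    local bdf=$1
    # Release the device from its current driver, if bound.
    [ -e "/sys/bus/pci/devices/$bdf/driver" ] && echo "$bdf" > "$(unbind_path "$bdf")"
    # driver_override makes vfio-pci claim the device on the next probe,
    # regardless of its ID tables.
    echo vfio-pci > "$(override_path "$bdf")"
    echo "$bdf" > /sys/bus/pci/drivers_probe
}

echo "would write to: $(override_path 0000:0d:00.0)"
```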
00:04:40.519   10:54:57  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:40.519   10:54:57  -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:40.519   10:54:57  -- common/autotest_common.sh@10 -- # set +x
00:04:40.519   10:54:57  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:40.519   10:54:57  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:40.519    10:54:57  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:40.519    10:54:57  -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:40.519    10:54:57  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:40.519    10:54:57  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:40.519    10:54:57  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:40.519     10:54:57  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:40.519     10:54:57  -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:40.519     10:54:57  -- common/autotest_common.sh@1498 -- # local bdfs
00:04:40.519     10:54:57  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:40.519      10:54:57  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:40.519      10:54:57  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:40.519     10:54:57  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:40.519     10:54:57  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:04:40.519    10:54:57  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:40.519     10:54:57  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0d:00.0/device
00:04:40.519    10:54:57  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:04:40.519    10:54:57  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:04:40.519    10:54:57  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:04:40.519    10:54:57  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:04:40.519    10:54:57  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0d:00.0
00:04:40.519   10:54:57  -- common/autotest_common.sh@1579 -- # [[ -z 0000:0d:00.0 ]]
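The get_nvme_bdfs_by_id trace above keeps only controllers whose sysfs `device` file matches the target ID (0x0a54). A minimal sketch of that filter against a throwaway fake sysfs tree, since the real `/sys/bus/pci/devices` layout is only present on the test host:

```shell
#!/usr/bin/env bash
# Filter bdfs by PCI device ID, mirroring the
# [[ $device == 0x0a54 ]] check in get_nvme_bdfs_by_id.
filter_bdfs_by_id() {
    local sysfs_root=$1 target=$2 bdf device
    shift 2
    for bdf in "$@"; do
        device=$(cat "$sysfs_root/$bdf/device")
        [ "$device" = "$target" ] && printf '%s\n' "$bdf"
    done
}

# Fake sysfs tree: one matching NVMe controller, one non-matching device.
root=$(mktemp -d)
mkdir -p "$root/0000:0d:00.0" "$root/0000:00:04.0"
echo 0x0a54 > "$root/0000:0d:00.0/device"
echo 0x6f20 > "$root/0000:00:04.0/device"

filter_bdfs_by_id "$root" 0x0a54 0000:0d:00.0 0000:00:04.0
# prints: 0000:0d:00.0
rm -rf "$root"
```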
00:04:40.519   10:54:57  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=99521
00:04:40.519   10:54:57  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:04:40.519   10:54:57  -- common/autotest_common.sh@1585 -- # waitforlisten 99521
00:04:40.519   10:54:57  -- common/autotest_common.sh@835 -- # '[' -z 99521 ']'
00:04:40.519   10:54:57  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:40.519   10:54:57  -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:40.519   10:54:57  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:40.519  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:40.519   10:54:57  -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:40.519   10:54:57  -- common/autotest_common.sh@10 -- # set +x
00:04:40.779  [2024-12-09 10:54:57.646433] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:04:40.779  [2024-12-09 10:54:57.646537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99521 ]
00:04:41.038  [2024-12-09 10:54:57.793552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:41.038  [2024-12-09 10:54:57.939670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.977   10:54:58  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:41.977   10:54:58  -- common/autotest_common.sh@868 -- # return 0
00:04:41.977   10:54:58  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:04:41.977   10:54:58  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:04:41.977   10:54:58  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0d:00.0
00:04:45.271  nvme0n1
00:04:45.271   10:55:01  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:45.271  [2024-12-09 10:55:01.937600] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:45.271  [2024-12-09 10:55:01.937649] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:45.271  request:
00:04:45.271  {
00:04:45.271    "nvme_ctrlr_name": "nvme0",
00:04:45.271    "password": "test",
00:04:45.271    "method": "bdev_nvme_opal_revert",
00:04:45.271    "req_id": 1
00:04:45.271  }
00:04:45.271  Got JSON-RPC error response
00:04:45.271  response:
00:04:45.271  {
00:04:45.271    "code": -32603,
00:04:45.271    "message": "Internal error"
00:04:45.271  }
00:04:45.271   10:55:01  -- common/autotest_common.sh@1591 -- # true
00:04:45.271   10:55:01  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:04:45.271   10:55:01  -- common/autotest_common.sh@1595 -- # killprocess 99521
00:04:45.271   10:55:01  -- common/autotest_common.sh@954 -- # '[' -z 99521 ']'
00:04:45.271   10:55:01  -- common/autotest_common.sh@958 -- # kill -0 99521
00:04:45.271    10:55:01  -- common/autotest_common.sh@959 -- # uname
00:04:45.271   10:55:01  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:45.271    10:55:01  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99521
00:04:45.271   10:55:01  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:45.271   10:55:01  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:45.271   10:55:01  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99521'
00:04:45.271  killing process with pid 99521
00:04:45.271   10:55:01  -- common/autotest_common.sh@973 -- # kill 99521
00:04:45.271   10:55:01  -- common/autotest_common.sh@978 -- # wait 99521
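The opal revert above fails with JSON-RPC error -32603 and the script tolerates it via `|| true` before killing spdk_tgt. A sketch of composing a request like the one rpc.py sends and classifying the error code in the response (the exact wire envelope is an assumption based on the request echoed in the log; nothing is actually sent to /var/tmp/spdk.sock here):

```shell
#!/usr/bin/env bash
# Compose a bdev_nvme_opal_revert JSON-RPC request (fields taken from
# the request echoed in the log) and pull the error code from a response.
build_request() {
    printf '{"jsonrpc": "2.0", "id": 1, "method": "%s", "params": {"nvme_ctrlr_name": "%s", "password": "%s"}}\n' \
        "$1" "$2" "$3"
}

# Response body as seen in the log; -32603 is JSON-RPC "Internal error".
response='{"code": -32603, "message": "Internal error"}'
code=$(printf '%s' "$response" | sed -n 's/.*"code": *\(-*[0-9]*\).*/\1/p')

build_request bdev_nvme_opal_revert nvme0 test
echo "error code: $code"
# A non-fatal revert failure like this is swallowed with `|| true`
# so the run can proceed to the next bdf.
```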
00:04:48.561   10:55:05  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:48.561   10:55:05  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:48.561   10:55:05  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:48.561   10:55:05  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:48.561   10:55:05  -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:48.561   10:55:05  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:48.561   10:55:05  -- common/autotest_common.sh@10 -- # set +x
00:04:48.561   10:55:05  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:48.561   10:55:05  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:04:48.561   10:55:05  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:48.561   10:55:05  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.561   10:55:05  -- common/autotest_common.sh@10 -- # set +x
00:04:48.561  ************************************
00:04:48.561  START TEST env
00:04:48.561  ************************************
00:04:48.561   10:55:05 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:04:48.561  * Looking for test storage...
00:04:48.561  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:04:48.561    10:55:05 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:48.561     10:55:05 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:48.561     10:55:05 env -- common/autotest_common.sh@1711 -- # lcov --version
00:04:48.561    10:55:05 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:48.561    10:55:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:48.561    10:55:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:48.561    10:55:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:48.561    10:55:05 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:48.561    10:55:05 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:48.561    10:55:05 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:48.561    10:55:05 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:48.561    10:55:05 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:48.561    10:55:05 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:48.561    10:55:05 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:48.561    10:55:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:48.561    10:55:05 env -- scripts/common.sh@344 -- # case "$op" in
00:04:48.561    10:55:05 env -- scripts/common.sh@345 -- # : 1
00:04:48.561    10:55:05 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:48.561    10:55:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:48.561     10:55:05 env -- scripts/common.sh@365 -- # decimal 1
00:04:48.561     10:55:05 env -- scripts/common.sh@353 -- # local d=1
00:04:48.561     10:55:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:48.561     10:55:05 env -- scripts/common.sh@355 -- # echo 1
00:04:48.561    10:55:05 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:48.561     10:55:05 env -- scripts/common.sh@366 -- # decimal 2
00:04:48.561     10:55:05 env -- scripts/common.sh@353 -- # local d=2
00:04:48.561     10:55:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:48.561     10:55:05 env -- scripts/common.sh@355 -- # echo 2
00:04:48.561    10:55:05 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:48.561    10:55:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:48.561    10:55:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:48.561    10:55:05 env -- scripts/common.sh@368 -- # return 0
00:04:48.561    10:55:05 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:48.561    10:55:05 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:48.561  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.561  		--rc genhtml_branch_coverage=1
00:04:48.562  		--rc genhtml_function_coverage=1
00:04:48.562  		--rc genhtml_legend=1
00:04:48.562  		--rc geninfo_all_blocks=1
00:04:48.562  		--rc geninfo_unexecuted_blocks=1
00:04:48.562  		
00:04:48.562  		'
00:04:48.562    10:55:05 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:48.562  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.562  		--rc genhtml_branch_coverage=1
00:04:48.562  		--rc genhtml_function_coverage=1
00:04:48.562  		--rc genhtml_legend=1
00:04:48.562  		--rc geninfo_all_blocks=1
00:04:48.562  		--rc geninfo_unexecuted_blocks=1
00:04:48.562  		
00:04:48.562  		'
00:04:48.562    10:55:05 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:48.562  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.562  		--rc genhtml_branch_coverage=1
00:04:48.562  		--rc genhtml_function_coverage=1
00:04:48.562  		--rc genhtml_legend=1
00:04:48.562  		--rc geninfo_all_blocks=1
00:04:48.562  		--rc geninfo_unexecuted_blocks=1
00:04:48.562  		
00:04:48.562  		'
00:04:48.562    10:55:05 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:48.562  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:48.562  		--rc genhtml_branch_coverage=1
00:04:48.562  		--rc genhtml_function_coverage=1
00:04:48.562  		--rc genhtml_legend=1
00:04:48.562  		--rc geninfo_all_blocks=1
00:04:48.562  		--rc geninfo_unexecuted_blocks=1
00:04:48.562  		
00:04:48.562  		'
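The cmp_versions walk traced above splits `lcov --version` output and the threshold `1.15`/`2` on dots and compares component by component. The same logic can be sketched as a standalone function (a simplified version of what scripts/common.sh does; it handles only dot-separated numeric components):

```shell
#!/usr/bin/env bash
# Dotted-version less-than, mirroring the cmp_versions trace:
# split on dots, pad the shorter version with zeros, compare numerically.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not less-than
}

# The trace compares 1.15 against 2: first component 1 < 2, so lt succeeds
# and the lcov coverage options get enabled.
version_lt 1.15 2 && echo "1.15 < 2"
```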
00:04:48.562   10:55:05 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:04:48.562   10:55:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:48.562   10:55:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:48.562   10:55:05 env -- common/autotest_common.sh@10 -- # set +x
00:04:48.562  ************************************
00:04:48.562  START TEST env_memory
00:04:48.562  ************************************
00:04:48.562   10:55:05 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:04:48.562  
00:04:48.562  
00:04:48.562       CUnit - A unit testing framework for C - Version 2.1-3
00:04:48.562       http://cunit.sourceforge.net/
00:04:48.562  
00:04:48.562  
00:04:48.562  Suite: mem_map_2mb
00:04:48.562    Test: alloc and free memory map ...[2024-12-09 10:55:05.419486] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:48.562  passed
00:04:48.562    Test: mem map translation ...[2024-12-09 10:55:05.460537] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:48.562  [2024-12-09 10:55:05.460569] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:48.562  [2024-12-09 10:55:05.460636] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:48.562  [2024-12-09 10:55:05.460655] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:48.562  passed
00:04:48.562    Test: mem map registration ...[2024-12-09 10:55:05.532469] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:48.562  [2024-12-09 10:55:05.532498] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:48.562  passed
00:04:48.820    Test: mem map adjacent registrations ...passed
00:04:48.820  Suite: mem_map_4kb
00:04:48.820    Test: alloc and free memory map ...[2024-12-09 10:55:05.706623] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:48.820  passed
00:04:48.820    Test: mem map translation ...[2024-12-09 10:55:05.752443] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234
00:04:48.820  [2024-12-09 10:55:05.752474] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096
00:04:48.820  [2024-12-09 10:55:05.772916] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:48.820  [2024-12-09 10:55:05.772939] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map
00:04:49.079  passed
00:04:49.079    Test: mem map registration ...[2024-12-09 10:55:05.877631] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234
00:04:49.079  [2024-12-09 10:55:05.877664] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096
00:04:49.079  passed
00:04:49.079    Test: mem map adjacent registrations ...passed
00:04:49.079  
00:04:49.079  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:49.079                suites      2      2    n/a      0        0
00:04:49.079                 tests      8      8      8      0        0
00:04:49.079               asserts    304    304    304      0      n/a
00:04:49.079  
00:04:49.079  Elapsed time =    0.632 seconds
00:04:49.079  
00:04:49.079  real	0m0.655s
00:04:49.079  user	0m0.617s
00:04:49.079  sys	0m0.036s
00:04:49.079   10:55:06 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:49.079   10:55:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:49.079  ************************************
00:04:49.079  END TEST env_memory
00:04:49.079  ************************************
00:04:49.079   10:55:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:49.079   10:55:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:49.079   10:55:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.079   10:55:06 env -- common/autotest_common.sh@10 -- # set +x
00:04:49.079  ************************************
00:04:49.079  START TEST env_vtophys
00:04:49.079  ************************************
00:04:49.079   10:55:06 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:49.340  EAL: lib.eal log level changed from notice to debug
00:04:49.340  EAL: Detected lcore 0 as core 0 on socket 0
00:04:49.340  EAL: Detected lcore 1 as core 1 on socket 0
00:04:49.340  EAL: Detected lcore 2 as core 2 on socket 0
00:04:49.340  EAL: Detected lcore 3 as core 3 on socket 0
00:04:49.340  EAL: Detected lcore 4 as core 4 on socket 0
00:04:49.340  EAL: Detected lcore 5 as core 5 on socket 0
00:04:49.340  EAL: Detected lcore 6 as core 8 on socket 0
00:04:49.340  EAL: Detected lcore 7 as core 9 on socket 0
00:04:49.340  EAL: Detected lcore 8 as core 10 on socket 0
00:04:49.340  EAL: Detected lcore 9 as core 11 on socket 0
00:04:49.340  EAL: Detected lcore 10 as core 12 on socket 0
00:04:49.340  EAL: Detected lcore 11 as core 16 on socket 0
00:04:49.340  EAL: Detected lcore 12 as core 17 on socket 0
00:04:49.340  EAL: Detected lcore 13 as core 18 on socket 0
00:04:49.340  EAL: Detected lcore 14 as core 19 on socket 0
00:04:49.340  EAL: Detected lcore 15 as core 20 on socket 0
00:04:49.340  EAL: Detected lcore 16 as core 21 on socket 0
00:04:49.340  EAL: Detected lcore 17 as core 24 on socket 0
00:04:49.340  EAL: Detected lcore 18 as core 25 on socket 0
00:04:49.340  EAL: Detected lcore 19 as core 26 on socket 0
00:04:49.340  EAL: Detected lcore 20 as core 27 on socket 0
00:04:49.340  EAL: Detected lcore 21 as core 28 on socket 0
00:04:49.340  EAL: Detected lcore 22 as core 0 on socket 1
00:04:49.340  EAL: Detected lcore 23 as core 1 on socket 1
00:04:49.340  EAL: Detected lcore 24 as core 2 on socket 1
00:04:49.340  EAL: Detected lcore 25 as core 3 on socket 1
00:04:49.340  EAL: Detected lcore 26 as core 4 on socket 1
00:04:49.340  EAL: Detected lcore 27 as core 5 on socket 1
00:04:49.340  EAL: Detected lcore 28 as core 8 on socket 1
00:04:49.340  EAL: Detected lcore 29 as core 9 on socket 1
00:04:49.340  EAL: Detected lcore 30 as core 10 on socket 1
00:04:49.340  EAL: Detected lcore 31 as core 11 on socket 1
00:04:49.340  EAL: Detected lcore 32 as core 12 on socket 1
00:04:49.340  EAL: Detected lcore 33 as core 16 on socket 1
00:04:49.340  EAL: Detected lcore 34 as core 17 on socket 1
00:04:49.340  EAL: Detected lcore 35 as core 18 on socket 1
00:04:49.340  EAL: Detected lcore 36 as core 19 on socket 1
00:04:49.340  EAL: Detected lcore 37 as core 20 on socket 1
00:04:49.340  EAL: Detected lcore 38 as core 21 on socket 1
00:04:49.340  EAL: Detected lcore 39 as core 24 on socket 1
00:04:49.340  EAL: Detected lcore 40 as core 25 on socket 1
00:04:49.340  EAL: Detected lcore 41 as core 26 on socket 1
00:04:49.340  EAL: Detected lcore 42 as core 27 on socket 1
00:04:49.340  EAL: Detected lcore 43 as core 28 on socket 1
00:04:49.340  EAL: Detected lcore 44 as core 0 on socket 0
00:04:49.340  EAL: Detected lcore 45 as core 1 on socket 0
00:04:49.340  EAL: Detected lcore 46 as core 2 on socket 0
00:04:49.340  EAL: Detected lcore 47 as core 3 on socket 0
00:04:49.340  EAL: Detected lcore 48 as core 4 on socket 0
00:04:49.340  EAL: Detected lcore 49 as core 5 on socket 0
00:04:49.340  EAL: Detected lcore 50 as core 8 on socket 0
00:04:49.340  EAL: Detected lcore 51 as core 9 on socket 0
00:04:49.340  EAL: Detected lcore 52 as core 10 on socket 0
00:04:49.340  EAL: Detected lcore 53 as core 11 on socket 0
00:04:49.340  EAL: Detected lcore 54 as core 12 on socket 0
00:04:49.340  EAL: Detected lcore 55 as core 16 on socket 0
00:04:49.340  EAL: Detected lcore 56 as core 17 on socket 0
00:04:49.340  EAL: Detected lcore 57 as core 18 on socket 0
00:04:49.340  EAL: Detected lcore 58 as core 19 on socket 0
00:04:49.340  EAL: Detected lcore 59 as core 20 on socket 0
00:04:49.340  EAL: Detected lcore 60 as core 21 on socket 0
00:04:49.340  EAL: Detected lcore 61 as core 24 on socket 0
00:04:49.340  EAL: Detected lcore 62 as core 25 on socket 0
00:04:49.340  EAL: Detected lcore 63 as core 26 on socket 0
00:04:49.340  EAL: Detected lcore 64 as core 27 on socket 0
00:04:49.340  EAL: Detected lcore 65 as core 28 on socket 0
00:04:49.340  EAL: Detected lcore 66 as core 0 on socket 1
00:04:49.340  EAL: Detected lcore 67 as core 1 on socket 1
00:04:49.340  EAL: Detected lcore 68 as core 2 on socket 1
00:04:49.340  EAL: Detected lcore 69 as core 3 on socket 1
00:04:49.340  EAL: Detected lcore 70 as core 4 on socket 1
00:04:49.340  EAL: Detected lcore 71 as core 5 on socket 1
00:04:49.340  EAL: Detected lcore 72 as core 8 on socket 1
00:04:49.340  EAL: Detected lcore 73 as core 9 on socket 1
00:04:49.340  EAL: Detected lcore 74 as core 10 on socket 1
00:04:49.340  EAL: Detected lcore 75 as core 11 on socket 1
00:04:49.340  EAL: Detected lcore 76 as core 12 on socket 1
00:04:49.340  EAL: Detected lcore 77 as core 16 on socket 1
00:04:49.340  EAL: Detected lcore 78 as core 17 on socket 1
00:04:49.340  EAL: Detected lcore 79 as core 18 on socket 1
00:04:49.340  EAL: Detected lcore 80 as core 19 on socket 1
00:04:49.340  EAL: Detected lcore 81 as core 20 on socket 1
00:04:49.340  EAL: Detected lcore 82 as core 21 on socket 1
00:04:49.340  EAL: Detected lcore 83 as core 24 on socket 1
00:04:49.340  EAL: Detected lcore 84 as core 25 on socket 1
00:04:49.340  EAL: Detected lcore 85 as core 26 on socket 1
00:04:49.340  EAL: Detected lcore 86 as core 27 on socket 1
00:04:49.340  EAL: Detected lcore 87 as core 28 on socket 1
00:04:49.340  EAL: Maximum logical cores by configuration: 128
00:04:49.340  EAL: Detected CPU lcores: 88
00:04:49.340  EAL: Detected NUMA nodes: 2
00:04:49.340  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:49.340  EAL: Detected shared linkage of DPDK
00:04:49.340  EAL: No shared files mode enabled, IPC will be disabled
00:04:49.340  EAL: No shared files mode enabled, IPC is disabled
00:04:49.340  EAL: Bus pci wants IOVA as 'DC'
00:04:49.340  EAL: Bus auxiliary wants IOVA as 'DC'
00:04:49.340  EAL: Bus vdev wants IOVA as 'DC'
00:04:49.340  EAL: Buses did not request a specific IOVA mode.
00:04:49.340  EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:49.340  EAL: Selected IOVA mode 'VA'
00:04:49.340  EAL: Probing VFIO support...
00:04:49.340  EAL: IOMMU type 1 (Type 1) is supported
00:04:49.340  EAL: IOMMU type 7 (sPAPR) is not supported
00:04:49.340  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:49.340  EAL: VFIO support initialized
00:04:49.340  EAL: Ask a virtual area of 0x2e000 bytes
00:04:49.340  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:49.340  EAL: Setting up physically contiguous memory...
00:04:49.340  EAL: Setting maximum number of open files to 524288
00:04:49.340  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:49.340  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:49.340  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:49.340  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.340  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:49.340  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:49.341  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:49.341  EAL: Ask a virtual area of 0x61000 bytes
00:04:49.341  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:49.341  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:49.341  EAL: Ask a virtual area of 0x400000000 bytes
00:04:49.341  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:49.341  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:49.341  EAL: Hugepages will be freed exactly as allocated.
00:04:49.341  EAL: No shared files mode enabled, IPC is disabled
00:04:49.341  EAL: No shared files mode enabled, IPC is disabled
00:04:49.341  EAL: TSC frequency is ~2200000 KHz
00:04:49.341  EAL: Main lcore 0 is ready (tid=7fb4ad332b40;cpuset=[0])
00:04:49.341  EAL: Trying to obtain current memory policy.
00:04:49.341  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.341  EAL: Restoring previous memory policy: 0
00:04:49.341  EAL: request: mp_malloc_sync
00:04:49.341  EAL: No shared files mode enabled, IPC is disabled
00:04:49.341  EAL: Heap on socket 0 was expanded by 2MB
00:04:49.341  EAL: No shared files mode enabled, IPC is disabled
00:04:49.341  EAL: No shared files mode enabled, IPC is disabled
00:04:49.341  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:49.341  EAL: Mem event callback 'spdk:(nil)' registered
00:04:49.341  
00:04:49.341  
00:04:49.341       CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.341       http://cunit.sourceforge.net/
00:04:49.341  
00:04:49.341  
00:04:49.341  Suite: components_suite
00:04:49.600    Test: vtophys_malloc_test ...passed
00:04:49.600    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:49.600  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.601  EAL: Restoring previous memory policy: 4
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was expanded by 4MB
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was shrunk by 4MB
00:04:49.601  EAL: Trying to obtain current memory policy.
00:04:49.601  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.601  EAL: Restoring previous memory policy: 4
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was expanded by 6MB
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was shrunk by 6MB
00:04:49.601  EAL: Trying to obtain current memory policy.
00:04:49.601  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.601  EAL: Restoring previous memory policy: 4
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was expanded by 10MB
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was shrunk by 10MB
00:04:49.601  EAL: Trying to obtain current memory policy.
00:04:49.601  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.601  EAL: Restoring previous memory policy: 4
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was expanded by 18MB
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was shrunk by 18MB
00:04:49.601  EAL: Trying to obtain current memory policy.
00:04:49.601  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.601  EAL: Restoring previous memory policy: 4
00:04:49.601  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.601  EAL: request: mp_malloc_sync
00:04:49.601  EAL: No shared files mode enabled, IPC is disabled
00:04:49.601  EAL: Heap on socket 0 was expanded by 34MB
00:04:49.860  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.860  EAL: request: mp_malloc_sync
00:04:49.860  EAL: No shared files mode enabled, IPC is disabled
00:04:49.860  EAL: Heap on socket 0 was shrunk by 34MB
00:04:49.860  EAL: Trying to obtain current memory policy.
00:04:49.860  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.860  EAL: Restoring previous memory policy: 4
00:04:49.860  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.860  EAL: request: mp_malloc_sync
00:04:49.860  EAL: No shared files mode enabled, IPC is disabled
00:04:49.860  EAL: Heap on socket 0 was expanded by 66MB
00:04:49.860  EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.860  EAL: request: mp_malloc_sync
00:04:49.860  EAL: No shared files mode enabled, IPC is disabled
00:04:49.860  EAL: Heap on socket 0 was shrunk by 66MB
00:04:49.860  EAL: Trying to obtain current memory policy.
00:04:49.860  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:50.119  EAL: Restoring previous memory policy: 4
00:04:50.119  EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.119  EAL: request: mp_malloc_sync
00:04:50.119  EAL: No shared files mode enabled, IPC is disabled
00:04:50.119  EAL: Heap on socket 0 was expanded by 130MB
00:04:50.119  EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.119  EAL: request: mp_malloc_sync
00:04:50.119  EAL: No shared files mode enabled, IPC is disabled
00:04:50.119  EAL: Heap on socket 0 was shrunk by 130MB
00:04:50.378  EAL: Trying to obtain current memory policy.
00:04:50.378  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:50.378  EAL: Restoring previous memory policy: 4
00:04:50.378  EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.378  EAL: request: mp_malloc_sync
00:04:50.379  EAL: No shared files mode enabled, IPC is disabled
00:04:50.379  EAL: Heap on socket 0 was expanded by 258MB
00:04:50.638  EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.897  EAL: request: mp_malloc_sync
00:04:50.897  EAL: No shared files mode enabled, IPC is disabled
00:04:50.898  EAL: Heap on socket 0 was shrunk by 258MB
00:04:51.157  EAL: Trying to obtain current memory policy.
00:04:51.158  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:51.158  EAL: Restoring previous memory policy: 4
00:04:51.158  EAL: Calling mem event callback 'spdk:(nil)'
00:04:51.158  EAL: request: mp_malloc_sync
00:04:51.158  EAL: No shared files mode enabled, IPC is disabled
00:04:51.158  EAL: Heap on socket 0 was expanded by 514MB
00:04:52.134  EAL: Calling mem event callback 'spdk:(nil)'
00:04:52.134  EAL: request: mp_malloc_sync
00:04:52.134  EAL: No shared files mode enabled, IPC is disabled
00:04:52.134  EAL: Heap on socket 0 was shrunk by 514MB
00:04:52.705  EAL: Trying to obtain current memory policy.
00:04:52.705  EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:52.705  EAL: Restoring previous memory policy: 4
00:04:52.705  EAL: Calling mem event callback 'spdk:(nil)'
00:04:52.705  EAL: request: mp_malloc_sync
00:04:52.705  EAL: No shared files mode enabled, IPC is disabled
00:04:52.705  EAL: Heap on socket 0 was expanded by 1026MB
00:04:54.086  EAL: Calling mem event callback 'spdk:(nil)'
00:04:54.346  EAL: request: mp_malloc_sync
00:04:54.346  EAL: No shared files mode enabled, IPC is disabled
00:04:54.346  EAL: Heap on socket 0 was shrunk by 1026MB
00:04:55.727  passed
00:04:55.727  
00:04:55.727  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:55.728                suites      1      1    n/a      0        0
00:04:55.728                 tests      2      2      2      0        0
00:04:55.728               asserts    497    497    497      0      n/a
00:04:55.728  
00:04:55.728  Elapsed time =    6.195 seconds
00:04:55.728  EAL: Calling mem event callback 'spdk:(nil)'
00:04:55.728  EAL: request: mp_malloc_sync
00:04:55.728  EAL: No shared files mode enabled, IPC is disabled
00:04:55.728  EAL: Heap on socket 0 was shrunk by 2MB
00:04:55.728  EAL: No shared files mode enabled, IPC is disabled
00:04:55.728  EAL: No shared files mode enabled, IPC is disabled
00:04:55.728  EAL: No shared files mode enabled, IPC is disabled
00:04:55.728  
00:04:55.728  real	0m6.423s
00:04:55.728  user	0m5.478s
00:04:55.728  sys	0m0.894s
00:04:55.728   10:55:12 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.728   10:55:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:55.728  ************************************
00:04:55.728  END TEST env_vtophys
00:04:55.728  ************************************
00:04:55.728   10:55:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:04:55.728   10:55:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.728   10:55:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.728   10:55:12 env -- common/autotest_common.sh@10 -- # set +x
00:04:55.728  ************************************
00:04:55.728  START TEST env_pci
00:04:55.728  ************************************
00:04:55.728   10:55:12 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut
00:04:55.728  
00:04:55.728  
00:04:55.728       CUnit - A unit testing framework for C - Version 2.1-3
00:04:55.728       http://cunit.sourceforge.net/
00:04:55.728  
00:04:55.728  
00:04:55.728  Suite: pci
00:04:55.728    Test: pci_hook ...[2024-12-09 10:55:12.565422] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102123 has claimed it
00:04:55.728  EAL: Cannot find device (10000:00:01.0)
00:04:55.728  EAL: Failed to attach device on primary process
00:04:55.728  passed
00:04:55.728  
00:04:55.728  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:55.728                suites      1      1    n/a      0        0
00:04:55.728                 tests      1      1      1      0        0
00:04:55.728               asserts     25     25     25      0      n/a
00:04:55.728  
00:04:55.728  Elapsed time =    0.033 seconds
00:04:55.728  
00:04:55.728  real	0m0.083s
00:04:55.728  user	0m0.031s
00:04:55.728  sys	0m0.052s
00:04:55.728   10:55:12 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.728   10:55:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:55.728  ************************************
00:04:55.728  END TEST env_pci
00:04:55.728  ************************************
00:04:55.728   10:55:12 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:55.728    10:55:12 env -- env/env.sh@15 -- # uname
00:04:55.728   10:55:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:55.728   10:55:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:55.728   10:55:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:55.728   10:55:12 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:55.728   10:55:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.728   10:55:12 env -- common/autotest_common.sh@10 -- # set +x
00:04:55.728  ************************************
00:04:55.728  START TEST env_dpdk_post_init
00:04:55.728  ************************************
00:04:55.728   10:55:12 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:55.728  EAL: Detected CPU lcores: 88
00:04:55.728  EAL: Detected NUMA nodes: 2
00:04:55.728  EAL: Detected shared linkage of DPDK
00:04:55.728  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:55.987  EAL: Selected IOVA mode 'VA'
00:04:55.987  EAL: VFIO support initialized
00:04:55.987  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:55.987  EAL: Using IOMMU type 1 (Type 1)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:00:04.0 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:00:04.1 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:00:04.2 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:00:04.3 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:00:04.4 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:00:04.5 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:00:04.6 (socket 0)
00:04:55.987  EAL: Ignore mapping IO port bar(1)
00:04:55.987  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:00:04.7 (socket 0)
00:04:56.929  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0d:00.0 (socket 0)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f20) device: 0000:80:04.0 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f21) device: 0000:80:04.1 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f22) device: 0000:80:04.2 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f23) device: 0000:80:04.3 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f24) device: 0000:80:04.4 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f25) device: 0000:80:04.5 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f26) device: 0000:80:04.6 (socket 1)
00:04:56.929  EAL: Ignore mapping IO port bar(1)
00:04:56.929  EAL: Probe PCI driver: spdk_ioat (8086:6f27) device: 0000:80:04.7 (socket 1)
00:05:00.219  EAL: Releasing PCI mapped resource for 0000:0d:00.0
00:05:00.219  EAL: Calling pci_unmap_resource for 0000:0d:00.0 at 0x202001020000
00:05:00.219  Starting DPDK initialization...
00:05:00.219  Starting SPDK post initialization...
00:05:00.219  SPDK NVMe probe
00:05:00.219  Attaching to 0000:0d:00.0
00:05:00.219  Attached to 0000:0d:00.0
00:05:00.219  Cleaning up...
00:05:00.219  
00:05:00.219  real	0m4.520s
00:05:00.219  user	0m3.083s
00:05:00.219  sys	0m0.492s
00:05:00.219   10:55:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.219   10:55:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:00.219  ************************************
00:05:00.219  END TEST env_dpdk_post_init
00:05:00.219  ************************************
00:05:00.219    10:55:17 env -- env/env.sh@26 -- # uname
00:05:00.219   10:55:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:00.219   10:55:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:00.219   10:55:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.219   10:55:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.219   10:55:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:00.478  ************************************
00:05:00.478  START TEST env_mem_callbacks
00:05:00.478  ************************************
00:05:00.478   10:55:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:00.478  EAL: Detected CPU lcores: 88
00:05:00.478  EAL: Detected NUMA nodes: 2
00:05:00.478  EAL: Detected shared linkage of DPDK
00:05:00.478  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:00.478  EAL: Selected IOVA mode 'VA'
00:05:00.478  EAL: VFIO support initialized
00:05:00.478  TELEMETRY: No legacy callbacks, legacy socket not created
00:05:00.478  
00:05:00.478  
00:05:00.478       CUnit - A unit testing framework for C - Version 2.1-3
00:05:00.478       http://cunit.sourceforge.net/
00:05:00.478  
00:05:00.478  
00:05:00.478  Suite: memory
00:05:00.478    Test: test ...
00:05:00.478  register 0x200000200000 2097152
00:05:00.478  malloc 3145728
00:05:00.478  register 0x200000400000 4194304
00:05:00.478  buf 0x2000004fffc0 len 3145728 PASSED
00:05:00.478  malloc 64
00:05:00.478  buf 0x2000004ffec0 len 64 PASSED
00:05:00.478  malloc 4194304
00:05:00.478  register 0x200000800000 6291456
00:05:00.478  buf 0x2000009fffc0 len 4194304 PASSED
00:05:00.478  free 0x2000004fffc0 3145728
00:05:00.478  free 0x2000004ffec0 64
00:05:00.478  unregister 0x200000400000 4194304 PASSED
00:05:00.478  free 0x2000009fffc0 4194304
00:05:00.478  unregister 0x200000800000 6291456 PASSED
00:05:00.478  malloc 8388608
00:05:00.478  register 0x200000400000 10485760
00:05:00.478  buf 0x2000005fffc0 len 8388608 PASSED
00:05:00.478  free 0x2000005fffc0 8388608
00:05:00.478  unregister 0x200000400000 10485760 PASSED
00:05:00.478  passed
00:05:00.478  
00:05:00.478  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:00.478                suites      1      1    n/a      0        0
00:05:00.478                 tests      1      1      1      0        0
00:05:00.478               asserts     15     15     15      0      n/a
00:05:00.478  
00:05:00.478  Elapsed time =    0.044 seconds
00:05:00.478  
00:05:00.478  real	0m0.143s
00:05:00.478  user	0m0.072s
00:05:00.478  sys	0m0.071s
00:05:00.478   10:55:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.478   10:55:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:00.478  ************************************
00:05:00.478  END TEST env_mem_callbacks
00:05:00.478  ************************************
00:05:00.478  
00:05:00.478  real	0m12.174s
00:05:00.478  user	0m9.447s
00:05:00.478  sys	0m1.752s
00:05:00.478   10:55:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.478   10:55:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:00.478  ************************************
00:05:00.478  END TEST env
00:05:00.478  ************************************
00:05:00.478   10:55:17  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:05:00.478   10:55:17  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.478   10:55:17  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.478   10:55:17  -- common/autotest_common.sh@10 -- # set +x
00:05:00.478  ************************************
00:05:00.478  START TEST rpc
00:05:00.478  ************************************
00:05:00.478   10:55:17 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh
00:05:00.478  * Looking for test storage...
00:05:00.478  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:00.478    10:55:17 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:00.478     10:55:17 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:00.738     10:55:17 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:00.738    10:55:17 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:00.738    10:55:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:00.738    10:55:17 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:00.738    10:55:17 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:00.738    10:55:17 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:00.738    10:55:17 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:00.738    10:55:17 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:00.738    10:55:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:00.738    10:55:17 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:00.738    10:55:17 rpc -- scripts/common.sh@345 -- # : 1
00:05:00.738    10:55:17 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:00.738    10:55:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:00.738     10:55:17 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:00.738     10:55:17 rpc -- scripts/common.sh@353 -- # local d=1
00:05:00.738     10:55:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:00.738     10:55:17 rpc -- scripts/common.sh@355 -- # echo 1
00:05:00.738    10:55:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:00.738     10:55:17 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:00.738     10:55:17 rpc -- scripts/common.sh@353 -- # local d=2
00:05:00.738     10:55:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:00.738     10:55:17 rpc -- scripts/common.sh@355 -- # echo 2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:00.738    10:55:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:00.738    10:55:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:00.738    10:55:17 rpc -- scripts/common.sh@368 -- # return 0
00:05:00.738    10:55:17 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:00.738    10:55:17 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:00.738  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.738  		--rc genhtml_branch_coverage=1
00:05:00.738  		--rc genhtml_function_coverage=1
00:05:00.738  		--rc genhtml_legend=1
00:05:00.738  		--rc geninfo_all_blocks=1
00:05:00.738  		--rc geninfo_unexecuted_blocks=1
00:05:00.738  		
00:05:00.738  		'
00:05:00.738    10:55:17 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:00.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.739  		--rc genhtml_branch_coverage=1
00:05:00.739  		--rc genhtml_function_coverage=1
00:05:00.739  		--rc genhtml_legend=1
00:05:00.739  		--rc geninfo_all_blocks=1
00:05:00.739  		--rc geninfo_unexecuted_blocks=1
00:05:00.739  		
00:05:00.739  		'
00:05:00.739    10:55:17 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:00.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.739  		--rc genhtml_branch_coverage=1
00:05:00.739  		--rc genhtml_function_coverage=1
00:05:00.739  		--rc genhtml_legend=1
00:05:00.739  		--rc geninfo_all_blocks=1
00:05:00.739  		--rc geninfo_unexecuted_blocks=1
00:05:00.739  		
00:05:00.739  		'
00:05:00.739    10:55:17 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:00.739  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.739  		--rc genhtml_branch_coverage=1
00:05:00.739  		--rc genhtml_function_coverage=1
00:05:00.739  		--rc genhtml_legend=1
00:05:00.739  		--rc geninfo_all_blocks=1
00:05:00.739  		--rc geninfo_unexecuted_blocks=1
00:05:00.739  		
00:05:00.739  		'
00:05:00.739   10:55:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=103259
00:05:00.739   10:55:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:00.739   10:55:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:00.739   10:55:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 103259
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 103259 ']'
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:00.739  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:00.739   10:55:17 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.739  [2024-12-09 10:55:17.649086] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:00.739  [2024-12-09 10:55:17.649181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103259 ]
00:05:01.088  [2024-12-09 10:55:17.757572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:01.088  [2024-12-09 10:55:17.853491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:01.088  [2024-12-09 10:55:17.853549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 103259' to capture a snapshot of events at runtime.
00:05:01.088  [2024-12-09 10:55:17.853567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:01.088  [2024-12-09 10:55:17.853579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:01.088  [2024-12-09 10:55:17.853591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid103259 for offline analysis/debug.
00:05:01.088  [2024-12-09 10:55:17.854729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.657   10:55:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:01.657   10:55:18 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:01.657   10:55:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:01.657   10:55:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:01.657   10:55:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:01.657   10:55:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:01.657   10:55:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.657   10:55:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.657   10:55:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.657  ************************************
00:05:01.657  START TEST rpc_integrity
00:05:01.657  ************************************
00:05:01.657   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:01.657    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:01.657    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.657    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.657    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.657   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:01.657    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:01.657   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:01.657    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:01.657    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.657    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.916    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.916   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:01.916    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:01.916    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.916    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.916    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.916   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:01.916  {
00:05:01.916  "name": "Malloc0",
00:05:01.916  "aliases": [
00:05:01.916  "8a8c8cd4-2dd1-4a49-af4a-f7e6d39d9b27"
00:05:01.916  ],
00:05:01.916  "product_name": "Malloc disk",
00:05:01.916  "block_size": 512,
00:05:01.916  "num_blocks": 16384,
00:05:01.916  "uuid": "8a8c8cd4-2dd1-4a49-af4a-f7e6d39d9b27",
00:05:01.916  "assigned_rate_limits": {
00:05:01.916  "rw_ios_per_sec": 0,
00:05:01.916  "rw_mbytes_per_sec": 0,
00:05:01.916  "r_mbytes_per_sec": 0,
00:05:01.916  "w_mbytes_per_sec": 0
00:05:01.916  },
00:05:01.916  "claimed": false,
00:05:01.916  "zoned": false,
00:05:01.916  "supported_io_types": {
00:05:01.916  "read": true,
00:05:01.916  "write": true,
00:05:01.916  "unmap": true,
00:05:01.916  "flush": true,
00:05:01.916  "reset": true,
00:05:01.916  "nvme_admin": false,
00:05:01.916  "nvme_io": false,
00:05:01.916  "nvme_io_md": false,
00:05:01.916  "write_zeroes": true,
00:05:01.916  "zcopy": true,
00:05:01.916  "get_zone_info": false,
00:05:01.916  "zone_management": false,
00:05:01.917  "zone_append": false,
00:05:01.917  "compare": false,
00:05:01.917  "compare_and_write": false,
00:05:01.917  "abort": true,
00:05:01.917  "seek_hole": false,
00:05:01.917  "seek_data": false,
00:05:01.917  "copy": true,
00:05:01.917  "nvme_iov_md": false
00:05:01.917  },
00:05:01.917  "memory_domains": [
00:05:01.917  {
00:05:01.917  "dma_device_id": "system",
00:05:01.917  "dma_device_type": 1
00:05:01.917  },
00:05:01.917  {
00:05:01.917  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.917  "dma_device_type": 2
00:05:01.917  }
00:05:01.917  ],
00:05:01.917  "driver_specific": {}
00:05:01.917  }
00:05:01.917  ]'
00:05:01.917    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:01.917   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:01.917   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:01.917   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.917   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.917  [2024-12-09 10:55:18.719814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:01.917  [2024-12-09 10:55:18.719868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:01.917  [2024-12-09 10:55:18.719899] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001c580
00:05:01.917  [2024-12-09 10:55:18.719915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:01.917  [2024-12-09 10:55:18.722288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:01.917  [2024-12-09 10:55:18.722314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:01.917  Passthru0
00:05:01.917   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.917    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:01.917    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.917    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.917    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.917   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:01.917  {
00:05:01.917  "name": "Malloc0",
00:05:01.917  "aliases": [
00:05:01.917  "8a8c8cd4-2dd1-4a49-af4a-f7e6d39d9b27"
00:05:01.917  ],
00:05:01.917  "product_name": "Malloc disk",
00:05:01.917  "block_size": 512,
00:05:01.917  "num_blocks": 16384,
00:05:01.917  "uuid": "8a8c8cd4-2dd1-4a49-af4a-f7e6d39d9b27",
00:05:01.917  "assigned_rate_limits": {
00:05:01.917  "rw_ios_per_sec": 0,
00:05:01.917  "rw_mbytes_per_sec": 0,
00:05:01.917  "r_mbytes_per_sec": 0,
00:05:01.917  "w_mbytes_per_sec": 0
00:05:01.917  },
00:05:01.917  "claimed": true,
00:05:01.917  "claim_type": "exclusive_write",
00:05:01.917  "zoned": false,
00:05:01.917  "supported_io_types": {
00:05:01.917  "read": true,
00:05:01.917  "write": true,
00:05:01.917  "unmap": true,
00:05:01.917  "flush": true,
00:05:01.917  "reset": true,
00:05:01.917  "nvme_admin": false,
00:05:01.917  "nvme_io": false,
00:05:01.917  "nvme_io_md": false,
00:05:01.917  "write_zeroes": true,
00:05:01.917  "zcopy": true,
00:05:01.917  "get_zone_info": false,
00:05:01.917  "zone_management": false,
00:05:01.917  "zone_append": false,
00:05:01.917  "compare": false,
00:05:01.917  "compare_and_write": false,
00:05:01.917  "abort": true,
00:05:01.917  "seek_hole": false,
00:05:01.917  "seek_data": false,
00:05:01.917  "copy": true,
00:05:01.917  "nvme_iov_md": false
00:05:01.917  },
00:05:01.917  "memory_domains": [
00:05:01.917  {
00:05:01.917  "dma_device_id": "system",
00:05:01.917  "dma_device_type": 1
00:05:01.917  },
00:05:01.917  {
00:05:01.917  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.917  "dma_device_type": 2
00:05:01.917  }
00:05:01.917  ],
00:05:01.917  "driver_specific": {}
00:05:01.917  },
00:05:01.917  {
00:05:01.917  "name": "Passthru0",
00:05:01.917  "aliases": [
00:05:01.917  "47bb0612-7c85-5c8c-b879-af36e2ad6dbc"
00:05:01.917  ],
00:05:01.917  "product_name": "passthru",
00:05:01.917  "block_size": 512,
00:05:01.917  "num_blocks": 16384,
00:05:01.917  "uuid": "47bb0612-7c85-5c8c-b879-af36e2ad6dbc",
00:05:01.917  "assigned_rate_limits": {
00:05:01.917  "rw_ios_per_sec": 0,
00:05:01.917  "rw_mbytes_per_sec": 0,
00:05:01.917  "r_mbytes_per_sec": 0,
00:05:01.917  "w_mbytes_per_sec": 0
00:05:01.917  },
00:05:01.917  "claimed": false,
00:05:01.917  "zoned": false,
00:05:01.917  "supported_io_types": {
00:05:01.917  "read": true,
00:05:01.917  "write": true,
00:05:01.917  "unmap": true,
00:05:01.917  "flush": true,
00:05:01.917  "reset": true,
00:05:01.917  "nvme_admin": false,
00:05:01.917  "nvme_io": false,
00:05:01.917  "nvme_io_md": false,
00:05:01.917  "write_zeroes": true,
00:05:01.917  "zcopy": true,
00:05:01.917  "get_zone_info": false,
00:05:01.917  "zone_management": false,
00:05:01.917  "zone_append": false,
00:05:01.917  "compare": false,
00:05:01.917  "compare_and_write": false,
00:05:01.917  "abort": true,
00:05:01.917  "seek_hole": false,
00:05:01.917  "seek_data": false,
00:05:01.917  "copy": true,
00:05:01.917  "nvme_iov_md": false
00:05:01.917  },
00:05:01.918  "memory_domains": [
00:05:01.918  {
00:05:01.918  "dma_device_id": "system",
00:05:01.918  "dma_device_type": 1
00:05:01.918  },
00:05:01.918  {
00:05:01.918  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:01.918  "dma_device_type": 2
00:05:01.918  }
00:05:01.918  ],
00:05:01.918  "driver_specific": {
00:05:01.918  "passthru": {
00:05:01.918  "name": "Passthru0",
00:05:01.918  "base_bdev_name": "Malloc0"
00:05:01.918  }
00:05:01.918  }
00:05:01.918  }
00:05:01.918  ]'
00:05:01.918    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:01.918   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:01.918   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.918   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.918    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:01.918    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.918    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.918    10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.918   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:01.918    10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:01.918   10:55:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:01.918  
00:05:01.918  real	0m0.248s
00:05:01.918  user	0m0.148s
00:05:01.918  sys	0m0.022s
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.918   10:55:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:01.918  ************************************
00:05:01.918  END TEST rpc_integrity
00:05:01.918  ************************************
00:05:01.918   10:55:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:01.918   10:55:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.918   10:55:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.918   10:55:18 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.918  ************************************
00:05:01.918  START TEST rpc_plugins
00:05:01.918  ************************************
00:05:01.918   10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:01.918    10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:01.918    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.918    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:01.918    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:01.918   10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:01.918    10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:01.918    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:01.918    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:02.178    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.178   10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:02.178  {
00:05:02.178  "name": "Malloc1",
00:05:02.178  "aliases": [
00:05:02.178  "aef54700-d65d-4323-bca5-9eee864f3056"
00:05:02.178  ],
00:05:02.178  "product_name": "Malloc disk",
00:05:02.178  "block_size": 4096,
00:05:02.178  "num_blocks": 256,
00:05:02.178  "uuid": "aef54700-d65d-4323-bca5-9eee864f3056",
00:05:02.178  "assigned_rate_limits": {
00:05:02.178  "rw_ios_per_sec": 0,
00:05:02.178  "rw_mbytes_per_sec": 0,
00:05:02.178  "r_mbytes_per_sec": 0,
00:05:02.178  "w_mbytes_per_sec": 0
00:05:02.178  },
00:05:02.178  "claimed": false,
00:05:02.178  "zoned": false,
00:05:02.178  "supported_io_types": {
00:05:02.178  "read": true,
00:05:02.178  "write": true,
00:05:02.178  "unmap": true,
00:05:02.178  "flush": true,
00:05:02.178  "reset": true,
00:05:02.178  "nvme_admin": false,
00:05:02.178  "nvme_io": false,
00:05:02.178  "nvme_io_md": false,
00:05:02.178  "write_zeroes": true,
00:05:02.178  "zcopy": true,
00:05:02.178  "get_zone_info": false,
00:05:02.178  "zone_management": false,
00:05:02.178  "zone_append": false,
00:05:02.178  "compare": false,
00:05:02.178  "compare_and_write": false,
00:05:02.178  "abort": true,
00:05:02.178  "seek_hole": false,
00:05:02.178  "seek_data": false,
00:05:02.178  "copy": true,
00:05:02.178  "nvme_iov_md": false
00:05:02.178  },
00:05:02.178  "memory_domains": [
00:05:02.178  {
00:05:02.178  "dma_device_id": "system",
00:05:02.178  "dma_device_type": 1
00:05:02.178  },
00:05:02.178  {
00:05:02.178  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:02.178  "dma_device_type": 2
00:05:02.178  }
00:05:02.178  ],
00:05:02.178  "driver_specific": {}
00:05:02.178  }
00:05:02.178  ]'
00:05:02.178    10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:02.178   10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:02.178   10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:02.178   10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.178   10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:02.178   10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.178    10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:02.178    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.178    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:02.178    10:55:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.178   10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:02.178    10:55:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:02.178   10:55:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:02.178  
00:05:02.178  real	0m0.124s
00:05:02.178  user	0m0.072s
00:05:02.178  sys	0m0.014s
00:05:02.178   10:55:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.178   10:55:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:02.178  ************************************
00:05:02.178  END TEST rpc_plugins
00:05:02.178  ************************************
00:05:02.178   10:55:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:02.178   10:55:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.178   10:55:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.178   10:55:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.178  ************************************
00:05:02.178  START TEST rpc_trace_cmd_test
00:05:02.178  ************************************
00:05:02.178   10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:05:02.178   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:02.178    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:02.178    10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.178    10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:02.178    10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.178   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:02.178  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid103259",
00:05:02.178  "tpoint_group_mask": "0x8",
00:05:02.178  "iscsi_conn": {
00:05:02.178  "mask": "0x2",
00:05:02.178  "tpoint_mask": "0x0"
00:05:02.178  },
00:05:02.178  "scsi": {
00:05:02.178  "mask": "0x4",
00:05:02.178  "tpoint_mask": "0x0"
00:05:02.178  },
00:05:02.178  "bdev": {
00:05:02.178  "mask": "0x8",
00:05:02.178  "tpoint_mask": "0xffffffffffffffff"
00:05:02.178  },
00:05:02.178  "nvmf_rdma": {
00:05:02.178  "mask": "0x10",
00:05:02.178  "tpoint_mask": "0x0"
00:05:02.178  },
00:05:02.178  "nvmf_tcp": {
00:05:02.178  "mask": "0x20",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "ftl": {
00:05:02.179  "mask": "0x40",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "blobfs": {
00:05:02.179  "mask": "0x80",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "dsa": {
00:05:02.179  "mask": "0x200",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "thread": {
00:05:02.179  "mask": "0x400",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "nvme_pcie": {
00:05:02.179  "mask": "0x800",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "iaa": {
00:05:02.179  "mask": "0x1000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "nvme_tcp": {
00:05:02.179  "mask": "0x2000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "bdev_nvme": {
00:05:02.179  "mask": "0x4000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "sock": {
00:05:02.179  "mask": "0x8000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "blob": {
00:05:02.179  "mask": "0x10000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "bdev_raid": {
00:05:02.179  "mask": "0x20000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  },
00:05:02.179  "scheduler": {
00:05:02.179  "mask": "0x40000",
00:05:02.179  "tpoint_mask": "0x0"
00:05:02.179  }
00:05:02.179  }'
00:05:02.179    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:02.179   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:05:02.179    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:02.179   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:02.179    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:02.438   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:02.438    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:02.438   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:02.438    10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:05:02.438   10:55:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:02.438  
00:05:02.438  real	0m0.208s
00:05:02.438  user	0m0.188s
00:05:02.438  sys	0m0.013s
00:05:02.438   10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.438   10:55:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:02.438  ************************************
00:05:02.438  END TEST rpc_trace_cmd_test
00:05:02.438  ************************************
00:05:02.438   10:55:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:02.438   10:55:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:02.438   10:55:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:02.438   10:55:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.438   10:55:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.438   10:55:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.438  ************************************
00:05:02.438  START TEST rpc_daemon_integrity
00:05:02.438  ************************************
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:02.438  {
00:05:02.438  "name": "Malloc2",
00:05:02.438  "aliases": [
00:05:02.438  "201eccb7-2036-4fde-87b3-cd651d41a076"
00:05:02.438  ],
00:05:02.438  "product_name": "Malloc disk",
00:05:02.438  "block_size": 512,
00:05:02.438  "num_blocks": 16384,
00:05:02.438  "uuid": "201eccb7-2036-4fde-87b3-cd651d41a076",
00:05:02.438  "assigned_rate_limits": {
00:05:02.438  "rw_ios_per_sec": 0,
00:05:02.438  "rw_mbytes_per_sec": 0,
00:05:02.438  "r_mbytes_per_sec": 0,
00:05:02.438  "w_mbytes_per_sec": 0
00:05:02.438  },
00:05:02.438  "claimed": false,
00:05:02.438  "zoned": false,
00:05:02.438  "supported_io_types": {
00:05:02.438  "read": true,
00:05:02.438  "write": true,
00:05:02.438  "unmap": true,
00:05:02.438  "flush": true,
00:05:02.438  "reset": true,
00:05:02.438  "nvme_admin": false,
00:05:02.438  "nvme_io": false,
00:05:02.438  "nvme_io_md": false,
00:05:02.438  "write_zeroes": true,
00:05:02.438  "zcopy": true,
00:05:02.438  "get_zone_info": false,
00:05:02.438  "zone_management": false,
00:05:02.438  "zone_append": false,
00:05:02.438  "compare": false,
00:05:02.438  "compare_and_write": false,
00:05:02.438  "abort": true,
00:05:02.438  "seek_hole": false,
00:05:02.438  "seek_data": false,
00:05:02.438  "copy": true,
00:05:02.438  "nvme_iov_md": false
00:05:02.438  },
00:05:02.438  "memory_domains": [
00:05:02.438  {
00:05:02.438  "dma_device_id": "system",
00:05:02.438  "dma_device_type": 1
00:05:02.438  },
00:05:02.438  {
00:05:02.438  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:02.438  "dma_device_type": 2
00:05:02.438  }
00:05:02.438  ],
00:05:02.438  "driver_specific": {}
00:05:02.438  }
00:05:02.438  ]'
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.438  [2024-12-09 10:55:19.435177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:02.438  [2024-12-09 10:55:19.435250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:02.438  [2024-12-09 10:55:19.435278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001d780
00:05:02.438  [2024-12-09 10:55:19.435292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:02.438  [2024-12-09 10:55:19.437695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:02.438  [2024-12-09 10:55:19.437720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:02.438  Passthru0
00:05:02.438   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.438    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.698    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.698   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:02.698  {
00:05:02.698  "name": "Malloc2",
00:05:02.698  "aliases": [
00:05:02.698  "201eccb7-2036-4fde-87b3-cd651d41a076"
00:05:02.698  ],
00:05:02.698  "product_name": "Malloc disk",
00:05:02.698  "block_size": 512,
00:05:02.698  "num_blocks": 16384,
00:05:02.698  "uuid": "201eccb7-2036-4fde-87b3-cd651d41a076",
00:05:02.698  "assigned_rate_limits": {
00:05:02.698  "rw_ios_per_sec": 0,
00:05:02.698  "rw_mbytes_per_sec": 0,
00:05:02.698  "r_mbytes_per_sec": 0,
00:05:02.698  "w_mbytes_per_sec": 0
00:05:02.698  },
00:05:02.698  "claimed": true,
00:05:02.698  "claim_type": "exclusive_write",
00:05:02.698  "zoned": false,
00:05:02.698  "supported_io_types": {
00:05:02.698  "read": true,
00:05:02.698  "write": true,
00:05:02.698  "unmap": true,
00:05:02.698  "flush": true,
00:05:02.698  "reset": true,
00:05:02.698  "nvme_admin": false,
00:05:02.698  "nvme_io": false,
00:05:02.698  "nvme_io_md": false,
00:05:02.698  "write_zeroes": true,
00:05:02.698  "zcopy": true,
00:05:02.698  "get_zone_info": false,
00:05:02.698  "zone_management": false,
00:05:02.698  "zone_append": false,
00:05:02.698  "compare": false,
00:05:02.698  "compare_and_write": false,
00:05:02.698  "abort": true,
00:05:02.698  "seek_hole": false,
00:05:02.698  "seek_data": false,
00:05:02.698  "copy": true,
00:05:02.698  "nvme_iov_md": false
00:05:02.698  },
00:05:02.698  "memory_domains": [
00:05:02.698  {
00:05:02.698  "dma_device_id": "system",
00:05:02.698  "dma_device_type": 1
00:05:02.698  },
00:05:02.698  {
00:05:02.698  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:02.698  "dma_device_type": 2
00:05:02.698  }
00:05:02.698  ],
00:05:02.698  "driver_specific": {}
00:05:02.698  },
00:05:02.698  {
00:05:02.698  "name": "Passthru0",
00:05:02.698  "aliases": [
00:05:02.698  "291cb021-4fe6-5b33-9e30-d33933c961df"
00:05:02.698  ],
00:05:02.698  "product_name": "passthru",
00:05:02.698  "block_size": 512,
00:05:02.698  "num_blocks": 16384,
00:05:02.698  "uuid": "291cb021-4fe6-5b33-9e30-d33933c961df",
00:05:02.698  "assigned_rate_limits": {
00:05:02.698  "rw_ios_per_sec": 0,
00:05:02.698  "rw_mbytes_per_sec": 0,
00:05:02.698  "r_mbytes_per_sec": 0,
00:05:02.698  "w_mbytes_per_sec": 0
00:05:02.698  },
00:05:02.698  "claimed": false,
00:05:02.698  "zoned": false,
00:05:02.698  "supported_io_types": {
00:05:02.698  "read": true,
00:05:02.698  "write": true,
00:05:02.698  "unmap": true,
00:05:02.698  "flush": true,
00:05:02.698  "reset": true,
00:05:02.698  "nvme_admin": false,
00:05:02.698  "nvme_io": false,
00:05:02.698  "nvme_io_md": false,
00:05:02.698  "write_zeroes": true,
00:05:02.698  "zcopy": true,
00:05:02.698  "get_zone_info": false,
00:05:02.698  "zone_management": false,
00:05:02.698  "zone_append": false,
00:05:02.698  "compare": false,
00:05:02.698  "compare_and_write": false,
00:05:02.698  "abort": true,
00:05:02.698  "seek_hole": false,
00:05:02.698  "seek_data": false,
00:05:02.698  "copy": true,
00:05:02.698  "nvme_iov_md": false
00:05:02.698  },
00:05:02.698  "memory_domains": [
00:05:02.698  {
00:05:02.698  "dma_device_id": "system",
00:05:02.698  "dma_device_type": 1
00:05:02.698  },
00:05:02.698  {
00:05:02.698  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:02.698  "dma_device_type": 2
00:05:02.698  }
00:05:02.698  ],
00:05:02.699  "driver_specific": {
00:05:02.699  "passthru": {
00:05:02.699  "name": "Passthru0",
00:05:02.699  "base_bdev_name": "Malloc2"
00:05:02.699  }
00:05:02.699  }
00:05:02.699  }
00:05:02.699  ]'
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:02.699    10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:02.699  
00:05:02.699  real	0m0.254s
00:05:02.699  user	0m0.158s
00:05:02.699  sys	0m0.020s
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.699   10:55:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:02.699  ************************************
00:05:02.699  END TEST rpc_daemon_integrity
00:05:02.699  ************************************
00:05:02.699   10:55:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:02.699   10:55:19 rpc -- rpc/rpc.sh@84 -- # killprocess 103259
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@954 -- # '[' -z 103259 ']'
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@958 -- # kill -0 103259
00:05:02.699    10:55:19 rpc -- common/autotest_common.sh@959 -- # uname
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:02.699    10:55:19 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103259
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103259'
00:05:02.699  killing process with pid 103259
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@973 -- # kill 103259
00:05:02.699   10:55:19 rpc -- common/autotest_common.sh@978 -- # wait 103259
00:05:04.607  
00:05:04.607  real	0m4.157s
00:05:04.607  user	0m4.657s
00:05:04.607  sys	0m0.769s
00:05:04.607   10:55:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.607   10:55:21 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.607  ************************************
00:05:04.607  END TEST rpc
00:05:04.607  ************************************
00:05:04.868   10:55:21  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:04.868   10:55:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.868   10:55:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.868   10:55:21  -- common/autotest_common.sh@10 -- # set +x
00:05:04.868  ************************************
00:05:04.868  START TEST skip_rpc
00:05:04.868  ************************************
00:05:04.868   10:55:21 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:05:04.868  * Looking for test storage...
00:05:04.868  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:04.868     10:55:21 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:04.868     10:55:21 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@345 -- # : 1
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:04.868     10:55:21 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:04.868    10:55:21 skip_rpc -- scripts/common.sh@368 -- # return 0
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:04.868  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.868  		--rc genhtml_branch_coverage=1
00:05:04.868  		--rc genhtml_function_coverage=1
00:05:04.868  		--rc genhtml_legend=1
00:05:04.868  		--rc geninfo_all_blocks=1
00:05:04.868  		--rc geninfo_unexecuted_blocks=1
00:05:04.868  		
00:05:04.868  		'
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:04.868  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.868  		--rc genhtml_branch_coverage=1
00:05:04.868  		--rc genhtml_function_coverage=1
00:05:04.868  		--rc genhtml_legend=1
00:05:04.868  		--rc geninfo_all_blocks=1
00:05:04.868  		--rc geninfo_unexecuted_blocks=1
00:05:04.868  		
00:05:04.868  		'
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:04.868  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.868  		--rc genhtml_branch_coverage=1
00:05:04.868  		--rc genhtml_function_coverage=1
00:05:04.868  		--rc genhtml_legend=1
00:05:04.868  		--rc geninfo_all_blocks=1
00:05:04.868  		--rc geninfo_unexecuted_blocks=1
00:05:04.868  		
00:05:04.868  		'
00:05:04.868    10:55:21 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:04.868  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.868  		--rc genhtml_branch_coverage=1
00:05:04.868  		--rc genhtml_function_coverage=1
00:05:04.868  		--rc genhtml_legend=1
00:05:04.868  		--rc geninfo_all_blocks=1
00:05:04.868  		--rc geninfo_unexecuted_blocks=1
00:05:04.868  		
00:05:04.868  		'
00:05:04.868   10:55:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:04.868   10:55:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:04.868   10:55:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:04.868   10:55:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.868   10:55:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.868   10:55:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:04.868  ************************************
00:05:04.868  START TEST skip_rpc
00:05:04.868  ************************************
00:05:04.868   10:55:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:05:04.868   10:55:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104072
00:05:04.868   10:55:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:04.868   10:55:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:04.868   10:55:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:05.128  [2024-12-09 10:55:21.892281] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:05.128  [2024-12-09 10:55:21.892380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104072 ]
00:05:05.128  [2024-12-09 10:55:22.000165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:05.128  [2024-12-09 10:55:22.095320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:10.407    10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104072
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104072 ']'
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104072
00:05:10.407    10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:05:10.407   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:10.408    10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104072
00:05:10.408   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:10.408   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:10.408   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104072'
00:05:10.408  killing process with pid 104072
00:05:10.408   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104072
00:05:10.408   10:55:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104072
00:05:11.789  
00:05:11.789  real	0m6.919s
00:05:11.789  user	0m6.491s
00:05:11.789  sys	0m0.442s
00:05:11.789   10:55:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.789   10:55:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.789  ************************************
00:05:11.789  END TEST skip_rpc
00:05:11.789  ************************************
00:05:11.789   10:55:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:11.789   10:55:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:11.789   10:55:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.789   10:55:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.789  ************************************
00:05:11.789  START TEST skip_rpc_with_json
00:05:11.789  ************************************
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105331
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105331
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105331 ']'
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:11.789  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:11.789   10:55:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.049  [2024-12-09 10:55:28.861726] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:12.049  [2024-12-09 10:55:28.861898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105331 ]
00:05:12.049  [2024-12-09 10:55:28.978235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.309  [2024-12-09 10:55:29.081111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.881  [2024-12-09 10:55:29.807639] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:12.881  request:
00:05:12.881  {
00:05:12.881  "trtype": "tcp",
00:05:12.881  "method": "nvmf_get_transports",
00:05:12.881  "req_id": 1
00:05:12.881  }
00:05:12.881  Got JSON-RPC error response
00:05:12.881  response:
00:05:12.881  {
00:05:12.881  "code": -19,
00:05:12.881  "message": "No such device"
00:05:12.881  }
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.881   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:12.881  [2024-12-09 10:55:29.819761] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:12.882   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:12.882   10:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:12.882   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:12.882   10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:13.142   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:13.142   10:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:13.142  {
00:05:13.142  "subsystems": [
00:05:13.142  {
00:05:13.142  "subsystem": "fsdev",
00:05:13.142  "config": [
00:05:13.142  {
00:05:13.142  "method": "fsdev_set_opts",
00:05:13.142  "params": {
00:05:13.142  "fsdev_io_pool_size": 65535,
00:05:13.142  "fsdev_io_cache_size": 256
00:05:13.142  }
00:05:13.142  }
00:05:13.142  ]
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "vfio_user_target",
00:05:13.142  "config": null
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "keyring",
00:05:13.142  "config": []
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "iobuf",
00:05:13.142  "config": [
00:05:13.142  {
00:05:13.142  "method": "iobuf_set_options",
00:05:13.142  "params": {
00:05:13.142  "small_pool_count": 8192,
00:05:13.142  "large_pool_count": 1024,
00:05:13.142  "small_bufsize": 8192,
00:05:13.142  "large_bufsize": 135168,
00:05:13.142  "enable_numa": false
00:05:13.142  }
00:05:13.142  }
00:05:13.142  ]
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "sock",
00:05:13.142  "config": [
00:05:13.142  {
00:05:13.142  "method": "sock_set_default_impl",
00:05:13.142  "params": {
00:05:13.142  "impl_name": "posix"
00:05:13.142  }
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "method": "sock_impl_set_options",
00:05:13.142  "params": {
00:05:13.142  "impl_name": "ssl",
00:05:13.142  "recv_buf_size": 4096,
00:05:13.142  "send_buf_size": 4096,
00:05:13.142  "enable_recv_pipe": true,
00:05:13.142  "enable_quickack": false,
00:05:13.142  "enable_placement_id": 0,
00:05:13.142  "enable_zerocopy_send_server": true,
00:05:13.142  "enable_zerocopy_send_client": false,
00:05:13.142  "zerocopy_threshold": 0,
00:05:13.142  "tls_version": 0,
00:05:13.142  "enable_ktls": false
00:05:13.142  }
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "method": "sock_impl_set_options",
00:05:13.142  "params": {
00:05:13.142  "impl_name": "posix",
00:05:13.142  "recv_buf_size": 2097152,
00:05:13.142  "send_buf_size": 2097152,
00:05:13.142  "enable_recv_pipe": true,
00:05:13.142  "enable_quickack": false,
00:05:13.142  "enable_placement_id": 0,
00:05:13.142  "enable_zerocopy_send_server": true,
00:05:13.142  "enable_zerocopy_send_client": false,
00:05:13.142  "zerocopy_threshold": 0,
00:05:13.142  "tls_version": 0,
00:05:13.142  "enable_ktls": false
00:05:13.142  }
00:05:13.142  }
00:05:13.142  ]
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "vmd",
00:05:13.142  "config": []
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "accel",
00:05:13.142  "config": [
00:05:13.142  {
00:05:13.142  "method": "accel_set_options",
00:05:13.142  "params": {
00:05:13.142  "small_cache_size": 128,
00:05:13.142  "large_cache_size": 16,
00:05:13.142  "task_count": 2048,
00:05:13.142  "sequence_count": 2048,
00:05:13.142  "buf_count": 2048
00:05:13.142  }
00:05:13.142  }
00:05:13.142  ]
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "subsystem": "bdev",
00:05:13.142  "config": [
00:05:13.142  {
00:05:13.142  "method": "bdev_set_options",
00:05:13.142  "params": {
00:05:13.142  "bdev_io_pool_size": 65535,
00:05:13.142  "bdev_io_cache_size": 256,
00:05:13.142  "bdev_auto_examine": true,
00:05:13.142  "iobuf_small_cache_size": 128,
00:05:13.142  "iobuf_large_cache_size": 16
00:05:13.142  }
00:05:13.142  },
00:05:13.142  {
00:05:13.142  "method": "bdev_raid_set_options",
00:05:13.142  "params": {
00:05:13.142  "process_window_size_kb": 1024,
00:05:13.142  "process_max_bandwidth_mb_sec": 0
00:05:13.142  }
00:05:13.142  },
00:05:13.142  {
00:05:13.143  "method": "bdev_iscsi_set_options",
00:05:13.143  "params": {
00:05:13.143  "timeout_sec": 30
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "bdev_nvme_set_options",
00:05:13.143  "params": {
00:05:13.143  "action_on_timeout": "none",
00:05:13.143  "timeout_us": 0,
00:05:13.143  "timeout_admin_us": 0,
00:05:13.143  "keep_alive_timeout_ms": 10000,
00:05:13.143  "arbitration_burst": 0,
00:05:13.143  "low_priority_weight": 0,
00:05:13.143  "medium_priority_weight": 0,
00:05:13.143  "high_priority_weight": 0,
00:05:13.143  "nvme_adminq_poll_period_us": 10000,
00:05:13.143  "nvme_ioq_poll_period_us": 0,
00:05:13.143  "io_queue_requests": 0,
00:05:13.143  "delay_cmd_submit": true,
00:05:13.143  "transport_retry_count": 4,
00:05:13.143  "bdev_retry_count": 3,
00:05:13.143  "transport_ack_timeout": 0,
00:05:13.143  "ctrlr_loss_timeout_sec": 0,
00:05:13.143  "reconnect_delay_sec": 0,
00:05:13.143  "fast_io_fail_timeout_sec": 0,
00:05:13.143  "disable_auto_failback": false,
00:05:13.143  "generate_uuids": false,
00:05:13.143  "transport_tos": 0,
00:05:13.143  "nvme_error_stat": false,
00:05:13.143  "rdma_srq_size": 0,
00:05:13.143  "io_path_stat": false,
00:05:13.143  "allow_accel_sequence": false,
00:05:13.143  "rdma_max_cq_size": 0,
00:05:13.143  "rdma_cm_event_timeout_ms": 0,
00:05:13.143  "dhchap_digests": [
00:05:13.143  "sha256",
00:05:13.143  "sha384",
00:05:13.143  "sha512"
00:05:13.143  ],
00:05:13.143  "dhchap_dhgroups": [
00:05:13.143  "null",
00:05:13.143  "ffdhe2048",
00:05:13.143  "ffdhe3072",
00:05:13.143  "ffdhe4096",
00:05:13.143  "ffdhe6144",
00:05:13.143  "ffdhe8192"
00:05:13.143  ]
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "bdev_nvme_set_hotplug",
00:05:13.143  "params": {
00:05:13.143  "period_us": 100000,
00:05:13.143  "enable": false
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "bdev_wait_for_examine"
00:05:13.143  }
00:05:13.143  ]
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "scsi",
00:05:13.143  "config": null
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "scheduler",
00:05:13.143  "config": [
00:05:13.143  {
00:05:13.143  "method": "framework_set_scheduler",
00:05:13.143  "params": {
00:05:13.143  "name": "static"
00:05:13.143  }
00:05:13.143  }
00:05:13.143  ]
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "vhost_scsi",
00:05:13.143  "config": []
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "vhost_blk",
00:05:13.143  "config": []
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "ublk",
00:05:13.143  "config": []
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "nbd",
00:05:13.143  "config": []
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "nvmf",
00:05:13.143  "config": [
00:05:13.143  {
00:05:13.143  "method": "nvmf_set_config",
00:05:13.143  "params": {
00:05:13.143  "discovery_filter": "match_any",
00:05:13.143  "admin_cmd_passthru": {
00:05:13.143  "identify_ctrlr": false
00:05:13.143  },
00:05:13.143  "dhchap_digests": [
00:05:13.143  "sha256",
00:05:13.143  "sha384",
00:05:13.143  "sha512"
00:05:13.143  ],
00:05:13.143  "dhchap_dhgroups": [
00:05:13.143  "null",
00:05:13.143  "ffdhe2048",
00:05:13.143  "ffdhe3072",
00:05:13.143  "ffdhe4096",
00:05:13.143  "ffdhe6144",
00:05:13.143  "ffdhe8192"
00:05:13.143  ]
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "nvmf_set_max_subsystems",
00:05:13.143  "params": {
00:05:13.143  "max_subsystems": 1024
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "nvmf_set_crdt",
00:05:13.143  "params": {
00:05:13.143  "crdt1": 0,
00:05:13.143  "crdt2": 0,
00:05:13.143  "crdt3": 0
00:05:13.143  }
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "method": "nvmf_create_transport",
00:05:13.143  "params": {
00:05:13.143  "trtype": "TCP",
00:05:13.143  "max_queue_depth": 128,
00:05:13.143  "max_io_qpairs_per_ctrlr": 127,
00:05:13.143  "in_capsule_data_size": 4096,
00:05:13.143  "max_io_size": 131072,
00:05:13.143  "io_unit_size": 131072,
00:05:13.143  "max_aq_depth": 128,
00:05:13.143  "num_shared_buffers": 511,
00:05:13.143  "buf_cache_size": 4294967295,
00:05:13.143  "dif_insert_or_strip": false,
00:05:13.143  "zcopy": false,
00:05:13.143  "c2h_success": true,
00:05:13.143  "sock_priority": 0,
00:05:13.143  "abort_timeout_sec": 1,
00:05:13.143  "ack_timeout": 0,
00:05:13.143  "data_wr_pool_size": 0
00:05:13.143  }
00:05:13.143  }
00:05:13.143  ]
00:05:13.143  },
00:05:13.143  {
00:05:13.143  "subsystem": "iscsi",
00:05:13.143  "config": [
00:05:13.143  {
00:05:13.143  "method": "iscsi_set_options",
00:05:13.143  "params": {
00:05:13.143  "node_base": "iqn.2016-06.io.spdk",
00:05:13.143  "max_sessions": 128,
00:05:13.143  "max_connections_per_session": 2,
00:05:13.143  "max_queue_depth": 64,
00:05:13.143  "default_time2wait": 2,
00:05:13.143  "default_time2retain": 20,
00:05:13.143  "first_burst_length": 8192,
00:05:13.143  "immediate_data": true,
00:05:13.143  "allow_duplicated_isid": false,
00:05:13.143  "error_recovery_level": 0,
00:05:13.143  "nop_timeout": 60,
00:05:13.143  "nop_in_interval": 30,
00:05:13.143  "disable_chap": false,
00:05:13.143  "require_chap": false,
00:05:13.143  "mutual_chap": false,
00:05:13.143  "chap_group": 0,
00:05:13.143  "max_large_datain_per_connection": 64,
00:05:13.143  "max_r2t_per_connection": 4,
00:05:13.143  "pdu_pool_size": 36864,
00:05:13.143  "immediate_data_pool_size": 16384,
00:05:13.143  "data_out_pool_size": 2048
00:05:13.143  }
00:05:13.143  }
00:05:13.143  ]
00:05:13.143  }
00:05:13.143  ]
00:05:13.143  }
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105331
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105331 ']'
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105331
00:05:13.143    10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:13.143    10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105331
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105331'
00:05:13.143  killing process with pid 105331
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105331
00:05:13.143   10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105331
00:05:15.052   10:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105954
00:05:15.052   10:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:15.052   10:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:20.327   10:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105954
00:05:20.327   10:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105954 ']'
00:05:20.327   10:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105954
00:05:20.327    10:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:05:20.327   10:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:20.327    10:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105954
00:05:20.327   10:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:20.327   10:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:20.327   10:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105954'
00:05:20.327  killing process with pid 105954
00:05:20.327   10:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105954
00:05:20.327   10:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105954
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt
00:05:22.238  
00:05:22.238  real	0m10.151s
00:05:22.238  user	0m9.663s
00:05:22.238  sys	0m1.016s
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:22.238  ************************************
00:05:22.238  END TEST skip_rpc_with_json
00:05:22.238  ************************************
00:05:22.238   10:55:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:22.238   10:55:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:22.238   10:55:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:22.238   10:55:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.238  ************************************
00:05:22.238  START TEST skip_rpc_with_delay
00:05:22.238  ************************************
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.238    10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.238    10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:22.238   10:55:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:22.238  [2024-12-09 10:55:39.068840] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:22.238  
00:05:22.238  real	0m0.148s
00:05:22.238  user	0m0.082s
00:05:22.238  sys	0m0.065s
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:22.238   10:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:22.238  ************************************
00:05:22.238  END TEST skip_rpc_with_delay
00:05:22.238  ************************************
00:05:22.238    10:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:22.238   10:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:22.238   10:55:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:22.238   10:55:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:22.238   10:55:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:22.238   10:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.238  ************************************
00:05:22.238  START TEST exit_on_failed_rpc_init
00:05:22.238  ************************************
00:05:22.238   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:05:22.238   10:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=107246
00:05:22.238   10:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:22.238   10:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 107246
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 107246 ']'
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:22.239  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:22.239   10:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:22.498  [2024-12-09 10:55:39.251509] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:22.498  [2024-12-09 10:55:39.251610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107246 ]
00:05:22.498  [2024-12-09 10:55:39.368594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.498  [2024-12-09 10:55:39.476491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:23.438    10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:23.438    10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:23.438   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:23.438  [2024-12-09 10:55:40.321088] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:23.438  [2024-12-09 10:55:40.321188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107457 ]
00:05:23.698  [2024-12-09 10:55:40.455568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.698  [2024-12-09 10:55:40.577209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:23.698  [2024-12-09 10:55:40.577336] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:23.698  [2024-12-09 10:55:40.577362] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:23.698  [2024-12-09 10:55:40.577375] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 107246
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 107246 ']'
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 107246
00:05:23.958    10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:23.958    10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107246
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107246'
00:05:23.958  killing process with pid 107246
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 107246
00:05:23.958   10:55:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 107246
00:05:25.866  
00:05:25.866  real	0m3.707s
00:05:25.866  user	0m4.131s
00:05:25.866  sys	0m0.700s
00:05:25.866   10:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:25.866   10:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:25.866  ************************************
00:05:25.866  END TEST exit_on_failed_rpc_init
00:05:25.866  ************************************
00:05:26.125   10:55:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json
00:05:26.125  
00:05:26.125  real	0m21.235s
00:05:26.125  user	0m20.501s
00:05:26.125  sys	0m2.418s
00:05:26.125   10:55:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:26.125   10:55:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.125  ************************************
00:05:26.125  END TEST skip_rpc
00:05:26.125  ************************************
00:05:26.125   10:55:42  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:26.125   10:55:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:26.125   10:55:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:26.125   10:55:42  -- common/autotest_common.sh@10 -- # set +x
00:05:26.125  ************************************
00:05:26.125  START TEST rpc_client
00:05:26.125  ************************************
00:05:26.125   10:55:42 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:26.125  * Looking for test storage...
00:05:26.125  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client
00:05:26.125    10:55:42 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:26.126     10:55:42 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:05:26.126     10:55:42 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:26.126     10:55:43 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:26.126    10:55:43 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:26.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.126  		--rc genhtml_branch_coverage=1
00:05:26.126  		--rc genhtml_function_coverage=1
00:05:26.126  		--rc genhtml_legend=1
00:05:26.126  		--rc geninfo_all_blocks=1
00:05:26.126  		--rc geninfo_unexecuted_blocks=1
00:05:26.126  		
00:05:26.126  		'
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:26.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.126  		--rc genhtml_branch_coverage=1
00:05:26.126  		--rc genhtml_function_coverage=1
00:05:26.126  		--rc genhtml_legend=1
00:05:26.126  		--rc geninfo_all_blocks=1
00:05:26.126  		--rc geninfo_unexecuted_blocks=1
00:05:26.126  		
00:05:26.126  		'
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:26.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.126  		--rc genhtml_branch_coverage=1
00:05:26.126  		--rc genhtml_function_coverage=1
00:05:26.126  		--rc genhtml_legend=1
00:05:26.126  		--rc geninfo_all_blocks=1
00:05:26.126  		--rc geninfo_unexecuted_blocks=1
00:05:26.126  		
00:05:26.126  		'
00:05:26.126    10:55:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:26.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.126  		--rc genhtml_branch_coverage=1
00:05:26.126  		--rc genhtml_function_coverage=1
00:05:26.126  		--rc genhtml_legend=1
00:05:26.126  		--rc geninfo_all_blocks=1
00:05:26.126  		--rc geninfo_unexecuted_blocks=1
00:05:26.126  		
00:05:26.126  		'
00:05:26.126   10:55:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:26.126  OK
00:05:26.126   10:55:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:26.126  
00:05:26.126  real	0m0.169s
00:05:26.126  user	0m0.104s
00:05:26.126  sys	0m0.074s
00:05:26.126   10:55:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:26.126   10:55:43 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:26.126  ************************************
00:05:26.126  END TEST rpc_client
00:05:26.126  ************************************
00:05:26.126   10:55:43  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:05:26.126   10:55:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:26.126   10:55:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:26.126   10:55:43  -- common/autotest_common.sh@10 -- # set +x
00:05:26.386  ************************************
00:05:26.386  START TEST json_config
00:05:26.386  ************************************
00:05:26.386   10:55:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:26.386     10:55:43 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:05:26.386     10:55:43 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:26.386    10:55:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:26.386    10:55:43 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:26.386    10:55:43 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:26.386    10:55:43 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:26.386    10:55:43 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:26.386    10:55:43 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:26.386    10:55:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:26.386    10:55:43 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:26.386    10:55:43 json_config -- scripts/common.sh@345 -- # : 1
00:05:26.386    10:55:43 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:26.386    10:55:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:26.386     10:55:43 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:26.386     10:55:43 json_config -- scripts/common.sh@353 -- # local d=1
00:05:26.386     10:55:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:26.386     10:55:43 json_config -- scripts/common.sh@355 -- # echo 1
00:05:26.386    10:55:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:26.386     10:55:43 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:26.386     10:55:43 json_config -- scripts/common.sh@353 -- # local d=2
00:05:26.386     10:55:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:26.386     10:55:43 json_config -- scripts/common.sh@355 -- # echo 2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:26.386    10:55:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:26.386    10:55:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:26.386    10:55:43 json_config -- scripts/common.sh@368 -- # return 0
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:26.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.386  		--rc genhtml_branch_coverage=1
00:05:26.386  		--rc genhtml_function_coverage=1
00:05:26.386  		--rc genhtml_legend=1
00:05:26.386  		--rc geninfo_all_blocks=1
00:05:26.386  		--rc geninfo_unexecuted_blocks=1
00:05:26.386  		
00:05:26.386  		'
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:26.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.386  		--rc genhtml_branch_coverage=1
00:05:26.386  		--rc genhtml_function_coverage=1
00:05:26.386  		--rc genhtml_legend=1
00:05:26.386  		--rc geninfo_all_blocks=1
00:05:26.386  		--rc geninfo_unexecuted_blocks=1
00:05:26.386  		
00:05:26.386  		'
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:26.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.386  		--rc genhtml_branch_coverage=1
00:05:26.386  		--rc genhtml_function_coverage=1
00:05:26.386  		--rc genhtml_legend=1
00:05:26.386  		--rc geninfo_all_blocks=1
00:05:26.386  		--rc geninfo_unexecuted_blocks=1
00:05:26.386  		
00:05:26.386  		'
00:05:26.386    10:55:43 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:26.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.386  		--rc genhtml_branch_coverage=1
00:05:26.386  		--rc genhtml_function_coverage=1
00:05:26.386  		--rc genhtml_legend=1
00:05:26.386  		--rc geninfo_all_blocks=1
00:05:26.386  		--rc geninfo_unexecuted_blocks=1
00:05:26.386  		
00:05:26.386  		'
00:05:26.386   10:55:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:05:26.386     10:55:43 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:26.386     10:55:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:05:26.386    10:55:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:26.387     10:55:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:26.387     10:55:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:26.387     10:55:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:26.387     10:55:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:26.387      10:55:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.387      10:55:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.387      10:55:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.387      10:55:43 json_config -- paths/export.sh@5 -- # export PATH
00:05:26.387      10:55:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@51 -- # : 0
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:26.387  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:26.387    10:55:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:05:26.387  WARNING: No tests are enabled so not running JSON configuration tests
00:05:26.387   10:55:43 json_config -- json_config/json_config.sh@28 -- # exit 0
00:05:26.387  
00:05:26.387  real	0m0.138s
00:05:26.387  user	0m0.097s
00:05:26.387  sys	0m0.043s
00:05:26.387   10:55:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:26.387   10:55:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:26.387  ************************************
00:05:26.387  END TEST json_config
00:05:26.387  ************************************
00:05:26.387   10:55:43  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:26.387   10:55:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:26.387   10:55:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:26.387   10:55:43  -- common/autotest_common.sh@10 -- # set +x
00:05:26.387  ************************************
00:05:26.387  START TEST json_config_extra_key
00:05:26.387  ************************************
00:05:26.387   10:55:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:26.387    10:55:43 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:26.387     10:55:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:26.387     10:55:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:26.646     10:55:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:26.646    10:55:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:26.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.646  		--rc genhtml_branch_coverage=1
00:05:26.646  		--rc genhtml_function_coverage=1
00:05:26.646  		--rc genhtml_legend=1
00:05:26.646  		--rc geninfo_all_blocks=1
00:05:26.646  		--rc geninfo_unexecuted_blocks=1
00:05:26.646  		
00:05:26.646  		'
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:26.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.646  		--rc genhtml_branch_coverage=1
00:05:26.646  		--rc genhtml_function_coverage=1
00:05:26.646  		--rc genhtml_legend=1
00:05:26.646  		--rc geninfo_all_blocks=1
00:05:26.646  		--rc geninfo_unexecuted_blocks=1
00:05:26.646  		
00:05:26.646  		'
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:26.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.646  		--rc genhtml_branch_coverage=1
00:05:26.646  		--rc genhtml_function_coverage=1
00:05:26.646  		--rc genhtml_legend=1
00:05:26.646  		--rc geninfo_all_blocks=1
00:05:26.646  		--rc geninfo_unexecuted_blocks=1
00:05:26.646  		
00:05:26.646  		'
00:05:26.646    10:55:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:26.646  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:26.646  		--rc genhtml_branch_coverage=1
00:05:26.646  		--rc genhtml_function_coverage=1
00:05:26.646  		--rc genhtml_legend=1
00:05:26.646  		--rc geninfo_all_blocks=1
00:05:26.646  		--rc geninfo_unexecuted_blocks=1
00:05:26.646  		
00:05:26.646  		'
00:05:26.646   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:05:26.646     10:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:26.646     10:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:26.646    10:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:808ec059-55a7-e511-906e-0012795d96dd
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=808ec059-55a7-e511-906e-0012795d96dd
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:05:26.647     10:55:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:26.647     10:55:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:26.647     10:55:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:26.647     10:55:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:26.647      10:55:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.647      10:55:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.647      10:55:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.647      10:55:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:26.647      10:55:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:26.647  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:26.647    10:55:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:26.647  INFO: launching applications...
00:05:26.647   10:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=108240
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:26.647  Waiting for target to run...
00:05:26.647   10:55:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 108240 /var/tmp/spdk_tgt.sock
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 108240 ']'
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:26.647  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:26.647   10:55:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:26.647  [2024-12-09 10:55:43.525859] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:26.647  [2024-12-09 10:55:43.525959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108240 ]
00:05:27.215  [2024-12-09 10:55:43.991478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.215  [2024-12-09 10:55:44.088624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.783   10:55:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:27.783   10:55:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:27.783  
00:05:27.783   10:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:27.783  INFO: shutting down applications...
00:05:27.783   10:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 108240 ]]
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 108240
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:27.783   10:55:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:28.352   10:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:28.352   10:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.352   10:55:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:28.352   10:55:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:28.921   10:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:28.921   10:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.921   10:55:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:28.921   10:55:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:29.180   10:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:29.180   10:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:29.180   10:55:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:29.180   10:55:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:29.749   10:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:29.749   10:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:29.749   10:55:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:29.749   10:55:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 108240
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:30.317   10:55:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:30.317  SPDK target shutdown done
00:05:30.317   10:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:30.317  Success
00:05:30.317  
00:05:30.317  real	0m3.825s
00:05:30.317  user	0m3.329s
00:05:30.317  sys	0m0.648s
00:05:30.317   10:55:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.317   10:55:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:30.317  ************************************
00:05:30.317  END TEST json_config_extra_key
00:05:30.317  ************************************
00:05:30.317   10:55:47  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:30.317   10:55:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:30.317   10:55:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.317   10:55:47  -- common/autotest_common.sh@10 -- # set +x
00:05:30.317  ************************************
00:05:30.317  START TEST alias_rpc
00:05:30.317  ************************************
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:30.318  * Looking for test storage...
00:05:30.318  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:30.318     10:55:47 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:30.318     10:55:47 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:30.318     10:55:47 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:30.318    10:55:47 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:30.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.318  		--rc genhtml_branch_coverage=1
00:05:30.318  		--rc genhtml_function_coverage=1
00:05:30.318  		--rc genhtml_legend=1
00:05:30.318  		--rc geninfo_all_blocks=1
00:05:30.318  		--rc geninfo_unexecuted_blocks=1
00:05:30.318  		
00:05:30.318  		'
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:30.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.318  		--rc genhtml_branch_coverage=1
00:05:30.318  		--rc genhtml_function_coverage=1
00:05:30.318  		--rc genhtml_legend=1
00:05:30.318  		--rc geninfo_all_blocks=1
00:05:30.318  		--rc geninfo_unexecuted_blocks=1
00:05:30.318  		
00:05:30.318  		'
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:30.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.318  		--rc genhtml_branch_coverage=1
00:05:30.318  		--rc genhtml_function_coverage=1
00:05:30.318  		--rc genhtml_legend=1
00:05:30.318  		--rc geninfo_all_blocks=1
00:05:30.318  		--rc geninfo_unexecuted_blocks=1
00:05:30.318  		
00:05:30.318  		'
00:05:30.318    10:55:47 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:30.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.318  		--rc genhtml_branch_coverage=1
00:05:30.318  		--rc genhtml_function_coverage=1
00:05:30.318  		--rc genhtml_legend=1
00:05:30.318  		--rc geninfo_all_blocks=1
00:05:30.318  		--rc geninfo_unexecuted_blocks=1
00:05:30.318  		
00:05:30.318  		'
00:05:30.318   10:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:30.318   10:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:30.318   10:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=108928
00:05:30.318   10:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 108928
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 108928 ']'
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.318  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.318   10:55:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:30.578  [2024-12-09 10:55:47.414086] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:30.578  [2024-12-09 10:55:47.414191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108928 ]
00:05:30.578  [2024-12-09 10:55:47.535182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.837  [2024-12-09 10:55:47.637322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.406   10:55:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.407   10:55:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:31.407   10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:31.666   10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 108928
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 108928 ']'
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 108928
00:05:31.666    10:55:48 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:31.666    10:55:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108928
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108928'
00:05:31.666  killing process with pid 108928
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 108928
00:05:31.666   10:55:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 108928
00:05:33.577  
00:05:33.577  real	0m3.370s
00:05:33.577  user	0m3.381s
00:05:33.577  sys	0m0.623s
00:05:33.577   10:55:50 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.577   10:55:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:33.577  ************************************
00:05:33.577  END TEST alias_rpc
00:05:33.577  ************************************
00:05:33.837   10:55:50  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:33.837   10:55:50  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:33.837   10:55:50  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.837   10:55:50  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.837   10:55:50  -- common/autotest_common.sh@10 -- # set +x
00:05:33.837  ************************************
00:05:33.837  START TEST spdkcli_tcp
00:05:33.837  ************************************
00:05:33.837   10:55:50 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:33.837  * Looking for test storage...
00:05:33.837  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:33.837     10:55:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:05:33.837     10:55:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:33.837     10:55:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:33.837    10:55:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:33.837  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.837  		--rc genhtml_branch_coverage=1
00:05:33.837  		--rc genhtml_function_coverage=1
00:05:33.837  		--rc genhtml_legend=1
00:05:33.837  		--rc geninfo_all_blocks=1
00:05:33.837  		--rc geninfo_unexecuted_blocks=1
00:05:33.837  		
00:05:33.837  		'
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:33.837  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.837  		--rc genhtml_branch_coverage=1
00:05:33.837  		--rc genhtml_function_coverage=1
00:05:33.837  		--rc genhtml_legend=1
00:05:33.837  		--rc geninfo_all_blocks=1
00:05:33.837  		--rc geninfo_unexecuted_blocks=1
00:05:33.837  		
00:05:33.837  		'
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:33.837  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.837  		--rc genhtml_branch_coverage=1
00:05:33.837  		--rc genhtml_function_coverage=1
00:05:33.837  		--rc genhtml_legend=1
00:05:33.837  		--rc geninfo_all_blocks=1
00:05:33.837  		--rc geninfo_unexecuted_blocks=1
00:05:33.837  		
00:05:33.837  		'
00:05:33.837    10:55:50 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:33.837  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:33.837  		--rc genhtml_branch_coverage=1
00:05:33.837  		--rc genhtml_function_coverage=1
00:05:33.837  		--rc genhtml_legend=1
00:05:33.837  		--rc geninfo_all_blocks=1
00:05:33.837  		--rc geninfo_unexecuted_blocks=1
00:05:33.837  		
00:05:33.837  		'
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh
00:05:33.838    10:55:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:33.838    10:55:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=109604
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:33.838   10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 109604
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 109604 ']'
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:33.838  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.838   10:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:34.097  [2024-12-09 10:55:50.856895] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:34.097  [2024-12-09 10:55:50.857018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109604 ]
00:05:34.097  [2024-12-09 10:55:50.972085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:34.097  [2024-12-09 10:55:51.075468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.097  [2024-12-09 10:55:51.075486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:35.037   10:55:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.037   10:55:51 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:35.037   10:55:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=109814
00:05:35.037   10:55:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:35.037   10:55:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:35.037  [
00:05:35.037    "bdev_malloc_delete",
00:05:35.037    "bdev_malloc_create",
00:05:35.037    "bdev_null_resize",
00:05:35.037    "bdev_null_delete",
00:05:35.037    "bdev_null_create",
00:05:35.037    "bdev_nvme_cuse_unregister",
00:05:35.037    "bdev_nvme_cuse_register",
00:05:35.037    "bdev_opal_new_user",
00:05:35.037    "bdev_opal_set_lock_state",
00:05:35.037    "bdev_opal_delete",
00:05:35.037    "bdev_opal_get_info",
00:05:35.037    "bdev_opal_create",
00:05:35.037    "bdev_nvme_opal_revert",
00:05:35.037    "bdev_nvme_opal_init",
00:05:35.037    "bdev_nvme_send_cmd",
00:05:35.037    "bdev_nvme_set_keys",
00:05:35.037    "bdev_nvme_get_path_iostat",
00:05:35.037    "bdev_nvme_get_mdns_discovery_info",
00:05:35.037    "bdev_nvme_stop_mdns_discovery",
00:05:35.037    "bdev_nvme_start_mdns_discovery",
00:05:35.037    "bdev_nvme_set_multipath_policy",
00:05:35.037    "bdev_nvme_set_preferred_path",
00:05:35.037    "bdev_nvme_get_io_paths",
00:05:35.037    "bdev_nvme_remove_error_injection",
00:05:35.037    "bdev_nvme_add_error_injection",
00:05:35.037    "bdev_nvme_get_discovery_info",
00:05:35.037    "bdev_nvme_stop_discovery",
00:05:35.037    "bdev_nvme_start_discovery",
00:05:35.037    "bdev_nvme_get_controller_health_info",
00:05:35.037    "bdev_nvme_disable_controller",
00:05:35.037    "bdev_nvme_enable_controller",
00:05:35.037    "bdev_nvme_reset_controller",
00:05:35.037    "bdev_nvme_get_transport_statistics",
00:05:35.037    "bdev_nvme_apply_firmware",
00:05:35.037    "bdev_nvme_detach_controller",
00:05:35.037    "bdev_nvme_get_controllers",
00:05:35.037    "bdev_nvme_attach_controller",
00:05:35.037    "bdev_nvme_set_hotplug",
00:05:35.037    "bdev_nvme_set_options",
00:05:35.037    "bdev_passthru_delete",
00:05:35.037    "bdev_passthru_create",
00:05:35.037    "bdev_lvol_set_parent_bdev",
00:05:35.037    "bdev_lvol_set_parent",
00:05:35.037    "bdev_lvol_check_shallow_copy",
00:05:35.037    "bdev_lvol_start_shallow_copy",
00:05:35.037    "bdev_lvol_grow_lvstore",
00:05:35.037    "bdev_lvol_get_lvols",
00:05:35.037    "bdev_lvol_get_lvstores",
00:05:35.037    "bdev_lvol_delete",
00:05:35.037    "bdev_lvol_set_read_only",
00:05:35.037    "bdev_lvol_resize",
00:05:35.037    "bdev_lvol_decouple_parent",
00:05:35.037    "bdev_lvol_inflate",
00:05:35.037    "bdev_lvol_rename",
00:05:35.037    "bdev_lvol_clone_bdev",
00:05:35.037    "bdev_lvol_clone",
00:05:35.037    "bdev_lvol_snapshot",
00:05:35.037    "bdev_lvol_create",
00:05:35.037    "bdev_lvol_delete_lvstore",
00:05:35.037    "bdev_lvol_rename_lvstore",
00:05:35.037    "bdev_lvol_create_lvstore",
00:05:35.037    "bdev_raid_set_options",
00:05:35.037    "bdev_raid_remove_base_bdev",
00:05:35.037    "bdev_raid_add_base_bdev",
00:05:35.037    "bdev_raid_delete",
00:05:35.037    "bdev_raid_create",
00:05:35.037    "bdev_raid_get_bdevs",
00:05:35.037    "bdev_error_inject_error",
00:05:35.037    "bdev_error_delete",
00:05:35.037    "bdev_error_create",
00:05:35.037    "bdev_split_delete",
00:05:35.037    "bdev_split_create",
00:05:35.037    "bdev_delay_delete",
00:05:35.037    "bdev_delay_create",
00:05:35.037    "bdev_delay_update_latency",
00:05:35.037    "bdev_zone_block_delete",
00:05:35.037    "bdev_zone_block_create",
00:05:35.037    "blobfs_create",
00:05:35.037    "blobfs_detect",
00:05:35.037    "blobfs_set_cache_size",
00:05:35.037    "bdev_crypto_delete",
00:05:35.037    "bdev_crypto_create",
00:05:35.037    "bdev_aio_delete",
00:05:35.037    "bdev_aio_rescan",
00:05:35.037    "bdev_aio_create",
00:05:35.037    "bdev_ftl_set_property",
00:05:35.037    "bdev_ftl_get_properties",
00:05:35.037    "bdev_ftl_get_stats",
00:05:35.037    "bdev_ftl_unmap",
00:05:35.037    "bdev_ftl_unload",
00:05:35.037    "bdev_ftl_delete",
00:05:35.037    "bdev_ftl_load",
00:05:35.037    "bdev_ftl_create",
00:05:35.037    "bdev_virtio_attach_controller",
00:05:35.037    "bdev_virtio_scsi_get_devices",
00:05:35.037    "bdev_virtio_detach_controller",
00:05:35.037    "bdev_virtio_blk_set_hotplug",
00:05:35.037    "bdev_iscsi_delete",
00:05:35.037    "bdev_iscsi_create",
00:05:35.037    "bdev_iscsi_set_options",
00:05:35.037    "accel_error_inject_error",
00:05:35.037    "ioat_scan_accel_module",
00:05:35.037    "dsa_scan_accel_module",
00:05:35.037    "iaa_scan_accel_module",
00:05:35.037    "dpdk_cryptodev_get_driver",
00:05:35.037    "dpdk_cryptodev_set_driver",
00:05:35.037    "dpdk_cryptodev_scan_accel_module",
00:05:35.037    "vfu_virtio_create_fs_endpoint",
00:05:35.037    "vfu_virtio_create_scsi_endpoint",
00:05:35.037    "vfu_virtio_scsi_remove_target",
00:05:35.037    "vfu_virtio_scsi_add_target",
00:05:35.037    "vfu_virtio_create_blk_endpoint",
00:05:35.037    "vfu_virtio_delete_endpoint",
00:05:35.038    "keyring_file_remove_key",
00:05:35.038    "keyring_file_add_key",
00:05:35.038    "keyring_linux_set_options",
00:05:35.038    "fsdev_aio_delete",
00:05:35.038    "fsdev_aio_create",
00:05:35.038    "iscsi_get_histogram",
00:05:35.038    "iscsi_enable_histogram",
00:05:35.038    "iscsi_set_options",
00:05:35.038    "iscsi_get_auth_groups",
00:05:35.038    "iscsi_auth_group_remove_secret",
00:05:35.038    "iscsi_auth_group_add_secret",
00:05:35.038    "iscsi_delete_auth_group",
00:05:35.038    "iscsi_create_auth_group",
00:05:35.038    "iscsi_set_discovery_auth",
00:05:35.038    "iscsi_get_options",
00:05:35.038    "iscsi_target_node_request_logout",
00:05:35.038    "iscsi_target_node_set_redirect",
00:05:35.038    "iscsi_target_node_set_auth",
00:05:35.038    "iscsi_target_node_add_lun",
00:05:35.038    "iscsi_get_stats",
00:05:35.038    "iscsi_get_connections",
00:05:35.038    "iscsi_portal_group_set_auth",
00:05:35.038    "iscsi_start_portal_group",
00:05:35.038    "iscsi_delete_portal_group",
00:05:35.038    "iscsi_create_portal_group",
00:05:35.038    "iscsi_get_portal_groups",
00:05:35.038    "iscsi_delete_target_node",
00:05:35.038    "iscsi_target_node_remove_pg_ig_maps",
00:05:35.038    "iscsi_target_node_add_pg_ig_maps",
00:05:35.038    "iscsi_create_target_node",
00:05:35.038    "iscsi_get_target_nodes",
00:05:35.038    "iscsi_delete_initiator_group",
00:05:35.038    "iscsi_initiator_group_remove_initiators",
00:05:35.038    "iscsi_initiator_group_add_initiators",
00:05:35.038    "iscsi_create_initiator_group",
00:05:35.038    "iscsi_get_initiator_groups",
00:05:35.038    "nvmf_set_crdt",
00:05:35.038    "nvmf_set_config",
00:05:35.038    "nvmf_set_max_subsystems",
00:05:35.038    "nvmf_stop_mdns_prr",
00:05:35.038    "nvmf_publish_mdns_prr",
00:05:35.038    "nvmf_subsystem_get_listeners",
00:05:35.038    "nvmf_subsystem_get_qpairs",
00:05:35.038    "nvmf_subsystem_get_controllers",
00:05:35.038    "nvmf_get_stats",
00:05:35.038    "nvmf_get_transports",
00:05:35.038    "nvmf_create_transport",
00:05:35.038    "nvmf_get_targets",
00:05:35.038    "nvmf_delete_target",
00:05:35.038    "nvmf_create_target",
00:05:35.038    "nvmf_subsystem_allow_any_host",
00:05:35.038    "nvmf_subsystem_set_keys",
00:05:35.038    "nvmf_subsystem_remove_host",
00:05:35.038    "nvmf_subsystem_add_host",
00:05:35.038    "nvmf_ns_remove_host",
00:05:35.038    "nvmf_ns_add_host",
00:05:35.038    "nvmf_subsystem_remove_ns",
00:05:35.038    "nvmf_subsystem_set_ns_ana_group",
00:05:35.038    "nvmf_subsystem_add_ns",
00:05:35.038    "nvmf_subsystem_listener_set_ana_state",
00:05:35.038    "nvmf_discovery_get_referrals",
00:05:35.038    "nvmf_discovery_remove_referral",
00:05:35.038    "nvmf_discovery_add_referral",
00:05:35.038    "nvmf_subsystem_remove_listener",
00:05:35.038    "nvmf_subsystem_add_listener",
00:05:35.038    "nvmf_delete_subsystem",
00:05:35.038    "nvmf_create_subsystem",
00:05:35.038    "nvmf_get_subsystems",
00:05:35.038    "env_dpdk_get_mem_stats",
00:05:35.038    "nbd_get_disks",
00:05:35.038    "nbd_stop_disk",
00:05:35.038    "nbd_start_disk",
00:05:35.038    "ublk_recover_disk",
00:05:35.038    "ublk_get_disks",
00:05:35.038    "ublk_stop_disk",
00:05:35.038    "ublk_start_disk",
00:05:35.038    "ublk_destroy_target",
00:05:35.038    "ublk_create_target",
00:05:35.038    "virtio_blk_create_transport",
00:05:35.038    "virtio_blk_get_transports",
00:05:35.038    "vhost_controller_set_coalescing",
00:05:35.038    "vhost_get_controllers",
00:05:35.038    "vhost_delete_controller",
00:05:35.038    "vhost_create_blk_controller",
00:05:35.038    "vhost_scsi_controller_remove_target",
00:05:35.038    "vhost_scsi_controller_add_target",
00:05:35.038    "vhost_start_scsi_controller",
00:05:35.038    "vhost_create_scsi_controller",
00:05:35.038    "thread_set_cpumask",
00:05:35.038    "scheduler_set_options",
00:05:35.038    "framework_get_governor",
00:05:35.038    "framework_get_scheduler",
00:05:35.038    "framework_set_scheduler",
00:05:35.038    "framework_get_reactors",
00:05:35.038    "thread_get_io_channels",
00:05:35.038    "thread_get_pollers",
00:05:35.038    "thread_get_stats",
00:05:35.038    "framework_monitor_context_switch",
00:05:35.038    "spdk_kill_instance",
00:05:35.038    "log_enable_timestamps",
00:05:35.038    "log_get_flags",
00:05:35.038    "log_clear_flag",
00:05:35.038    "log_set_flag",
00:05:35.038    "log_get_level",
00:05:35.038    "log_set_level",
00:05:35.038    "log_get_print_level",
00:05:35.038    "log_set_print_level",
00:05:35.038    "framework_enable_cpumask_locks",
00:05:35.038    "framework_disable_cpumask_locks",
00:05:35.038    "framework_wait_init",
00:05:35.038    "framework_start_init",
00:05:35.038    "scsi_get_devices",
00:05:35.038    "bdev_get_histogram",
00:05:35.038    "bdev_enable_histogram",
00:05:35.038    "bdev_set_qos_limit",
00:05:35.038    "bdev_set_qd_sampling_period",
00:05:35.038    "bdev_get_bdevs",
00:05:35.038    "bdev_reset_iostat",
00:05:35.038    "bdev_get_iostat",
00:05:35.038    "bdev_examine",
00:05:35.038    "bdev_wait_for_examine",
00:05:35.038    "bdev_set_options",
00:05:35.038    "accel_get_stats",
00:05:35.038    "accel_set_options",
00:05:35.038    "accel_set_driver",
00:05:35.038    "accel_crypto_key_destroy",
00:05:35.038    "accel_crypto_keys_get",
00:05:35.038    "accel_crypto_key_create",
00:05:35.038    "accel_assign_opc",
00:05:35.038    "accel_get_module_info",
00:05:35.038    "accel_get_opc_assignments",
00:05:35.038    "vmd_rescan",
00:05:35.038    "vmd_remove_device",
00:05:35.038    "vmd_enable",
00:05:35.038    "sock_get_default_impl",
00:05:35.038    "sock_set_default_impl",
00:05:35.038    "sock_impl_set_options",
00:05:35.038    "sock_impl_get_options",
00:05:35.038    "iobuf_get_stats",
00:05:35.038    "iobuf_set_options",
00:05:35.038    "keyring_get_keys",
00:05:35.038    "vfu_tgt_set_base_path",
00:05:35.038    "framework_get_pci_devices",
00:05:35.038    "framework_get_config",
00:05:35.038    "framework_get_subsystems",
00:05:35.038    "fsdev_set_opts",
00:05:35.038    "fsdev_get_opts",
00:05:35.038    "trace_get_info",
00:05:35.038    "trace_get_tpoint_group_mask",
00:05:35.038    "trace_disable_tpoint_group",
00:05:35.038    "trace_enable_tpoint_group",
00:05:35.038    "trace_clear_tpoint_mask",
00:05:35.038    "trace_set_tpoint_mask",
00:05:35.038    "notify_get_notifications",
00:05:35.038    "notify_get_types",
00:05:35.038    "spdk_get_version",
00:05:35.038    "rpc_get_methods"
00:05:35.038  ]
00:05:35.038   10:55:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:35.038   10:55:52 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:35.038   10:55:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:35.298   10:55:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:35.298   10:55:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 109604
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 109604 ']'
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 109604
00:05:35.298    10:55:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.298    10:55:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109604
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109604'
00:05:35.298  killing process with pid 109604
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 109604
00:05:35.298   10:55:52 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 109604
00:05:37.206  
00:05:37.206  real	0m3.411s
00:05:37.206  user	0m6.168s
00:05:37.206  sys	0m0.602s
00:05:37.206   10:55:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.206   10:55:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:37.206  ************************************
00:05:37.206  END TEST spdkcli_tcp
00:05:37.206  ************************************
00:05:37.206   10:55:54  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:37.206   10:55:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.206   10:55:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.206   10:55:54  -- common/autotest_common.sh@10 -- # set +x
00:05:37.206  ************************************
00:05:37.206  START TEST dpdk_mem_utility
00:05:37.206  ************************************
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:37.206  * Looking for test storage...
00:05:37.206  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:37.206     10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:05:37.206     10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:37.206     10:55:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:37.206    10:55:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:37.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.206  		--rc genhtml_branch_coverage=1
00:05:37.206  		--rc genhtml_function_coverage=1
00:05:37.206  		--rc genhtml_legend=1
00:05:37.206  		--rc geninfo_all_blocks=1
00:05:37.206  		--rc geninfo_unexecuted_blocks=1
00:05:37.206  		
00:05:37.206  		'
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:37.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.206  		--rc genhtml_branch_coverage=1
00:05:37.206  		--rc genhtml_function_coverage=1
00:05:37.206  		--rc genhtml_legend=1
00:05:37.206  		--rc geninfo_all_blocks=1
00:05:37.206  		--rc geninfo_unexecuted_blocks=1
00:05:37.206  		
00:05:37.206  		'
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:37.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.206  		--rc genhtml_branch_coverage=1
00:05:37.206  		--rc genhtml_function_coverage=1
00:05:37.206  		--rc genhtml_legend=1
00:05:37.206  		--rc geninfo_all_blocks=1
00:05:37.206  		--rc geninfo_unexecuted_blocks=1
00:05:37.206  		
00:05:37.206  		'
00:05:37.206    10:55:54 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:37.206  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.206  		--rc genhtml_branch_coverage=1
00:05:37.206  		--rc genhtml_function_coverage=1
00:05:37.206  		--rc genhtml_legend=1
00:05:37.206  		--rc geninfo_all_blocks=1
00:05:37.206  		--rc geninfo_unexecuted_blocks=1
00:05:37.206  		
00:05:37.206  		'
00:05:37.206   10:55:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:37.206   10:55:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110297
00:05:37.206   10:55:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:05:37.206   10:55:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110297
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 110297 ']'
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.206  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.206   10:55:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:37.466  [2024-12-09 10:55:54.293145] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:37.466  [2024-12-09 10:55:54.293244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110297 ]
00:05:37.466  [2024-12-09 10:55:54.401858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.725  [2024-12-09 10:55:54.499696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.293   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:38.293   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:38.293   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:38.293   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:38.293   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:38.293   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:38.293  {
00:05:38.293  "filename": "/tmp/spdk_mem_dump.txt"
00:05:38.293  }
00:05:38.293   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:38.293   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:38.293  DPDK memory size 824.000000 MiB in 1 heap(s)
00:05:38.293  1 heaps totaling size 824.000000 MiB
00:05:38.293    size:  824.000000 MiB heap id: 0
00:05:38.293  end heaps----------
00:05:38.293  9 mempools totaling size 603.782043 MiB
00:05:38.293    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:05:38.293    size:  158.602051 MiB name: PDU_data_out_Pool
00:05:38.293    size:  100.555481 MiB name: bdev_io_110297
00:05:38.293    size:   50.003479 MiB name: msgpool_110297
00:05:38.293    size:   36.509338 MiB name: fsdev_io_110297
00:05:38.293    size:   21.763794 MiB name: PDU_Pool
00:05:38.293    size:   19.513306 MiB name: SCSI_TASK_Pool
00:05:38.293    size:    4.133484 MiB name: evtpool_110297
00:05:38.293    size:    0.026123 MiB name: Session_Pool
00:05:38.293  end mempools-------
00:05:38.293  6 memzones totaling size 4.142822 MiB
00:05:38.293    size:    1.000366 MiB name: RG_ring_0_110297
00:05:38.293    size:    1.000366 MiB name: RG_ring_1_110297
00:05:38.293    size:    1.000366 MiB name: RG_ring_4_110297
00:05:38.293    size:    1.000366 MiB name: RG_ring_5_110297
00:05:38.293    size:    0.125366 MiB name: RG_ring_2_110297
00:05:38.293    size:    0.015991 MiB name: RG_ring_3_110297
00:05:38.293  end memzones-------
00:05:38.293   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:38.553  heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19
00:05:38.553    list of free elements. size: 16.847595 MiB
00:05:38.553      element at address: 0x200006400000 with size:    1.995972 MiB
00:05:38.553      element at address: 0x20000a600000 with size:    1.995972 MiB
00:05:38.553      element at address: 0x200003e00000 with size:    1.991028 MiB
00:05:38.553      element at address: 0x200019500040 with size:    0.999939 MiB
00:05:38.553      element at address: 0x200019900040 with size:    0.999939 MiB
00:05:38.553      element at address: 0x200019a00000 with size:    0.999329 MiB
00:05:38.553      element at address: 0x200000400000 with size:    0.998108 MiB
00:05:38.553      element at address: 0x200032600000 with size:    0.994324 MiB
00:05:38.553      element at address: 0x200019200000 with size:    0.959900 MiB
00:05:38.553      element at address: 0x200019d00040 with size:    0.937256 MiB
00:05:38.553      element at address: 0x200000200000 with size:    0.716980 MiB
00:05:38.553      element at address: 0x20001b400000 with size:    0.583191 MiB
00:05:38.553      element at address: 0x200000c00000 with size:    0.495300 MiB
00:05:38.553      element at address: 0x200019600000 with size:    0.491150 MiB
00:05:38.553      element at address: 0x200019e00000 with size:    0.485657 MiB
00:05:38.553      element at address: 0x200012c00000 with size:    0.436157 MiB
00:05:38.553      element at address: 0x200028800000 with size:    0.411072 MiB
00:05:38.553      element at address: 0x200000800000 with size:    0.355286 MiB
00:05:38.553      element at address: 0x20000a5ff040 with size:    0.001038 MiB
00:05:38.553    list of standard malloc elements. size: 199.221497 MiB
00:05:38.553      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:05:38.553      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:05:38.553      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:05:38.553      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:05:38.553      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:05:38.553      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:05:38.553      element at address: 0x200019deff40 with size:    0.062683 MiB
00:05:38.553      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:05:38.553      element at address: 0x200012bff040 with size:    0.000427 MiB
00:05:38.553      element at address: 0x200012bffa00 with size:    0.000366 MiB
00:05:38.553      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000004ffa40 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:05:38.553      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200000cff000 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff480 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff580 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff680 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff780 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff880 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ff980 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff200 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff300 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff400 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff500 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff600 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff700 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff800 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bff900 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:05:38.553      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:05:38.553    list of memzone associated elements. size: 607.930908 MiB
00:05:38.553      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:05:38.553        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:38.553      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:05:38.553        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:38.553      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:05:38.553        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_110297_0
00:05:38.553      element at address: 0x200000dff340 with size:   48.003113 MiB
00:05:38.553        associated memzone info: size:   48.002930 MiB name: MP_msgpool_110297_0
00:05:38.553      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:05:38.553        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_110297_0
00:05:38.553      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:05:38.553        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:05:38.553      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:05:38.553        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:38.553      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:05:38.553        associated memzone info: size:    3.000122 MiB name: MP_evtpool_110297_0
00:05:38.553      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:05:38.553        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_110297
00:05:38.553      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:05:38.553        associated memzone info: size:    1.007996 MiB name: MP_evtpool_110297
00:05:38.553      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:05:38.553        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:05:38.553      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:05:38.553        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:38.553      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:05:38.553        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:05:38.553      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:05:38.553        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:38.553      element at address: 0x200000cff100 with size:    1.000549 MiB
00:05:38.553        associated memzone info: size:    1.000366 MiB name: RG_ring_0_110297
00:05:38.553      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:05:38.553        associated memzone info: size:    1.000366 MiB name: RG_ring_1_110297
00:05:38.553      element at address: 0x200019affd40 with size:    1.000549 MiB
00:05:38.553        associated memzone info: size:    1.000366 MiB name: RG_ring_4_110297
00:05:38.553      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:05:38.553        associated memzone info: size:    1.000366 MiB name: RG_ring_5_110297
00:05:38.553      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:05:38.553        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_110297
00:05:38.553      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:05:38.553        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_110297
00:05:38.553      element at address: 0x20001967dbc0 with size:    0.500549 MiB
00:05:38.553        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:05:38.553      element at address: 0x200012c6fa80 with size:    0.500549 MiB
00:05:38.553        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:38.553      element at address: 0x200019e7c540 with size:    0.250549 MiB
00:05:38.553        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:38.553      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:05:38.553        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_110297
00:05:38.553      element at address: 0x20000085f180 with size:    0.125549 MiB
00:05:38.553        associated memzone info: size:    0.125366 MiB name: RG_ring_2_110297
00:05:38.553      element at address: 0x2000192f5bc0 with size:    0.031799 MiB
00:05:38.553        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:38.553      element at address: 0x2000288693c0 with size:    0.023804 MiB
00:05:38.553        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:05:38.553      element at address: 0x20000085af40 with size:    0.016174 MiB
00:05:38.553        associated memzone info: size:    0.015991 MiB name: RG_ring_3_110297
00:05:38.553      element at address: 0x20002886f540 with size:    0.002502 MiB
00:05:38.553        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:05:38.553      element at address: 0x2000004ffb40 with size:    0.000366 MiB
00:05:38.554        associated memzone info: size:    0.000183 MiB name: MP_msgpool_110297
00:05:38.554      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:05:38.554        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_110297
00:05:38.554      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:05:38.554        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_110297
00:05:38.554      element at address: 0x20000a5ffa80 with size:    0.000366 MiB
00:05:38.554        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:05:38.554   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:38.554   10:55:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110297
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 110297 ']'
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 110297
00:05:38.554    10:55:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:38.554    10:55:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110297
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110297'
00:05:38.554  killing process with pid 110297
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 110297
00:05:38.554   10:55:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 110297
00:05:40.462  
00:05:40.462  real	0m3.158s
00:05:40.462  user	0m3.133s
00:05:40.462  sys	0m0.548s
00:05:40.462   10:55:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.462   10:55:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:40.462  ************************************
00:05:40.462  END TEST dpdk_mem_utility
00:05:40.462  ************************************
00:05:40.462   10:55:57  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:05:40.462   10:55:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.462   10:55:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.462   10:55:57  -- common/autotest_common.sh@10 -- # set +x
00:05:40.462  ************************************
00:05:40.462  START TEST event
00:05:40.462  ************************************
00:05:40.462   10:55:57 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh
00:05:40.462  * Looking for test storage...
00:05:40.462  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:05:40.462    10:55:57 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:40.462     10:55:57 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:40.462     10:55:57 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:40.462    10:55:57 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:40.462    10:55:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:40.462    10:55:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:40.462    10:55:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:40.462    10:55:57 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:40.462    10:55:57 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:40.462    10:55:57 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:40.462    10:55:57 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:40.462    10:55:57 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:40.462    10:55:57 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:40.462    10:55:57 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:40.462    10:55:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:40.462    10:55:57 event -- scripts/common.sh@344 -- # case "$op" in
00:05:40.462    10:55:57 event -- scripts/common.sh@345 -- # : 1
00:05:40.462    10:55:57 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:40.462    10:55:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:40.462     10:55:57 event -- scripts/common.sh@365 -- # decimal 1
00:05:40.462     10:55:57 event -- scripts/common.sh@353 -- # local d=1
00:05:40.462     10:55:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:40.462     10:55:57 event -- scripts/common.sh@355 -- # echo 1
00:05:40.462    10:55:57 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:40.462     10:55:57 event -- scripts/common.sh@366 -- # decimal 2
00:05:40.462     10:55:57 event -- scripts/common.sh@353 -- # local d=2
00:05:40.462     10:55:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:40.462     10:55:57 event -- scripts/common.sh@355 -- # echo 2
00:05:40.462    10:55:57 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:40.462    10:55:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:40.462    10:55:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:40.462    10:55:57 event -- scripts/common.sh@368 -- # return 0
00:05:40.462    10:55:57 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:40.462    10:55:57 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:40.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.462  		--rc genhtml_branch_coverage=1
00:05:40.462  		--rc genhtml_function_coverage=1
00:05:40.462  		--rc genhtml_legend=1
00:05:40.462  		--rc geninfo_all_blocks=1
00:05:40.462  		--rc geninfo_unexecuted_blocks=1
00:05:40.462  		
00:05:40.462  		'
00:05:40.462    10:55:57 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:40.462  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.462  		--rc genhtml_branch_coverage=1
00:05:40.462  		--rc genhtml_function_coverage=1
00:05:40.463  		--rc genhtml_legend=1
00:05:40.463  		--rc geninfo_all_blocks=1
00:05:40.463  		--rc geninfo_unexecuted_blocks=1
00:05:40.463  		
00:05:40.463  		'
00:05:40.463    10:55:57 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:40.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.463  		--rc genhtml_branch_coverage=1
00:05:40.463  		--rc genhtml_function_coverage=1
00:05:40.463  		--rc genhtml_legend=1
00:05:40.463  		--rc geninfo_all_blocks=1
00:05:40.463  		--rc geninfo_unexecuted_blocks=1
00:05:40.463  		
00:05:40.463  		'
00:05:40.463    10:55:57 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:40.463  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.463  		--rc genhtml_branch_coverage=1
00:05:40.463  		--rc genhtml_function_coverage=1
00:05:40.463  		--rc genhtml_legend=1
00:05:40.463  		--rc geninfo_all_blocks=1
00:05:40.463  		--rc geninfo_unexecuted_blocks=1
00:05:40.463  		
00:05:40.463  		'
00:05:40.463   10:55:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:40.463    10:55:57 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:40.463   10:55:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:40.463   10:55:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:40.463   10:55:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.463   10:55:57 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.463  ************************************
00:05:40.463  START TEST event_perf
00:05:40.463  ************************************
00:05:40.463   10:55:57 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:40.463  Running I/O for 1 seconds...[2024-12-09 10:55:57.439641] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:40.463  [2024-12-09 10:55:57.439725] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110984 ]
00:05:40.723  [2024-12-09 10:55:57.550862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:40.723  [2024-12-09 10:55:57.652153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.723  [2024-12-09 10:55:57.652228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:40.723  [2024-12-09 10:55:57.652271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.723  [2024-12-09 10:55:57.652291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:42.100  Running I/O for 1 seconds...
00:05:42.100  lcore  0:   193212
00:05:42.100  lcore  1:   193212
00:05:42.100  lcore  2:   193211
00:05:42.100  lcore  3:   193211
00:05:42.100  done.
00:05:42.100  
00:05:42.100  real	0m1.520s
00:05:42.100  user	0m4.364s
00:05:42.100  sys	0m0.147s
00:05:42.100   10:55:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.100   10:55:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:42.100  ************************************
00:05:42.100  END TEST event_perf
00:05:42.100  ************************************
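The event_perf run above was launched with core mask -m 0xF and reports one counter per lcore (0 through 3). As a side note, the mask decodes bit-by-bit into that lcore list; the helper below is a minimal standalone sketch (not part of SPDK or the test scripts) that expands a hex core mask the same way:

```shell
# mask_to_cores: expand an SPDK-style hex core mask into the list of
# lcore indices whose bits are set. Pure shell arithmetic, no SPDK needed.
mask_to_cores() {
  local mask=$(( $1 )) i=0 out=""
  while [ "$mask" -ne 0 ]; do
    # bit i set -> lcore i is in the mask
    if [ $(( mask & 1 )) -eq 1 ]; then out="$out $i"; fi
    mask=$(( mask >> 1 ))
    i=$(( i + 1 ))
  done
  echo "${out# }"
}

mask_to_cores 0xF   # -> 0 1 2 3  (the four lcore counters printed above)
mask_to_cores 0x3   # -> 0 1
```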
00:05:42.100   10:55:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:42.100   10:55:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:42.100   10:55:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.100   10:55:58 event -- common/autotest_common.sh@10 -- # set +x
00:05:42.100  ************************************
00:05:42.100  START TEST event_reactor
00:05:42.100  ************************************
00:05:42.100   10:55:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:42.100  [2024-12-09 10:55:59.008470] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:42.100  [2024-12-09 10:55:59.008565] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111214 ]
00:05:42.360  [2024-12-09 10:55:59.126249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.360  [2024-12-09 10:55:59.222337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.740  test_start
00:05:43.740  oneshot
00:05:43.740  tick 100
00:05:43.740  tick 100
00:05:43.740  tick 250
00:05:43.740  tick 100
00:05:43.740  tick 100
00:05:43.740  tick 100
00:05:43.740  tick 250
00:05:43.740  tick 500
00:05:43.740  tick 100
00:05:43.740  tick 100
00:05:43.740  tick 250
00:05:43.740  tick 100
00:05:43.740  tick 100
00:05:43.740  test_end
00:05:43.740  
00:05:43.740  real	0m1.520s
00:05:43.740  user	0m1.385s
00:05:43.740  sys	0m0.128s
00:05:43.740   10:56:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.740   10:56:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:43.740  ************************************
00:05:43.740  END TEST event_reactor
00:05:43.740  ************************************
00:05:43.740   10:56:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:43.740   10:56:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:43.740   10:56:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.740   10:56:00 event -- common/autotest_common.sh@10 -- # set +x
00:05:43.740  ************************************
00:05:43.740  START TEST event_reactor_perf
00:05:43.740  ************************************
00:05:43.740   10:56:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:43.740  [2024-12-09 10:56:00.580739] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:43.740  [2024-12-09 10:56:00.580885] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111447 ]
00:05:43.740  [2024-12-09 10:56:00.710357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.000  [2024-12-09 10:56:00.826550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.380  test_start
00:05:45.380  test_end
00:05:45.380  Performance:   363787 events per second
00:05:45.380  
00:05:45.380  real	0m1.540s
00:05:45.380  user	0m1.407s
00:05:45.380  sys	0m0.124s
00:05:45.380   10:56:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.380   10:56:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:45.380  ************************************
00:05:45.380  END TEST event_reactor_perf
00:05:45.380  ************************************
00:05:45.380    10:56:02 event -- event/event.sh@49 -- # uname -s
00:05:45.380   10:56:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:45.380   10:56:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:45.380   10:56:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.380   10:56:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.380   10:56:02 event -- common/autotest_common.sh@10 -- # set +x
00:05:45.380  ************************************
00:05:45.380  START TEST event_scheduler
00:05:45.380  ************************************
00:05:45.380   10:56:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:45.380  * Looking for test storage...
00:05:45.380  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler
00:05:45.381   10:56:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:45.381   10:56:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=111903
00:05:45.381   10:56:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:45.381   10:56:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:45.381   10:56:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 111903
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 111903 ']'
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:45.381  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:45.381   10:56:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.381  [2024-12-09 10:56:02.322424] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:45.381  [2024-12-09 10:56:02.322547] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111903 ]
00:05:45.640  [2024-12-09 10:56:02.433641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:45.640  [2024-12-09 10:56:02.542370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.640  [2024-12-09 10:56:02.542444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:45.640  [2024-12-09 10:56:02.542505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:45.640  [2024-12-09 10:56:02.542485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:46.209   10:56:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:46.209  [2024-12-09 10:56:03.213395] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:46.209  [2024-12-09 10:56:03.213439] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:46.209  [2024-12-09 10:56:03.213490] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:46.209  [2024-12-09 10:56:03.213507] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:46.209  [2024-12-09 10:56:03.213521] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.209   10:56:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.209   10:56:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  [2024-12-09 10:56:03.491132] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:46.780   10:56:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:46.780   10:56:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.780   10:56:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  ************************************
00:05:46.780  START TEST scheduler_create_thread
00:05:46.780  ************************************
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  2
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  3
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  4
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  5
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  6
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  7
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  8
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  9
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780  10
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:46.780   10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:46.780    10:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:48.159    10:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.159   10:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:48.159   10:56:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:48.159   10:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.159   10:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:49.537   10:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:49.537  
00:05:49.537  real	0m2.621s
00:05:49.537  user	0m0.020s
00:05:49.537  sys	0m0.005s
00:05:49.537   10:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.537   10:56:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:49.537  ************************************
00:05:49.537  END TEST scheduler_create_thread
00:05:49.537  ************************************
00:05:49.537   10:56:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:49.537   10:56:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 111903
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 111903 ']'
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 111903
00:05:49.537    10:56:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:49.537    10:56:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111903
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111903'
00:05:49.537  killing process with pid 111903
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 111903
00:05:49.537   10:56:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 111903
00:05:49.797  [2024-12-09 10:56:06.621995] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:50.740  
00:05:50.740  real	0m5.476s
00:05:50.740  user	0m9.885s
00:05:50.740  sys	0m0.470s
00:05:50.740   10:56:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:50.740   10:56:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:50.740  ************************************
00:05:50.740  END TEST event_scheduler
00:05:50.740  ************************************
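For reference, the RPC calls that scheduler.sh drives in the test above (framework_set_scheduler, framework_start_init, then a pinned active/idle thread pair per core at weights 100 and 0) can be sketched as the dry-run script below. This is a hedged approximation, not the actual test script: it echoes the rpc.py invocations by default, and the real rpc.py path and -s socket argument are assumed.

```shell
# drive_scheduler_rpcs: dry-run sketch of the RPC sequence the scheduler
# test issues. Set RPC to e.g. "scripts/rpc.py -s /var/tmp/spdk.sock"
# (assumed path) to run against a live SPDK app instead of echoing.
drive_scheduler_rpcs() {
  local rpc="${RPC:-echo rpc.py}"   # default: print the commands only
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  local core
  for core in 0 1 2 3; do
    # one busy (-a 100) and one idle (-a 0) thread pinned to each core,
    # mirroring the scheduler_thread_create calls in the log above
    $rpc --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m "$(printf '0x%x' $((1 << core)))" -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create \
      -n idle_pinned -m "$(printf '0x%x' $((1 << core)))" -a 0
  done
}

drive_scheduler_rpcs
```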
00:05:50.740   10:56:07 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:50.740   10:56:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:50.740   10:56:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:50.740   10:56:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:50.740   10:56:07 event -- common/autotest_common.sh@10 -- # set +x
00:05:50.740  ************************************
00:05:50.740  START TEST app_repeat
00:05:50.740  ************************************
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112797
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112797'
00:05:50.740  Process app_repeat pid: 112797
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:50.740  spdk_app_start Round 0
00:05:50.740   10:56:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112797 /var/tmp/spdk-nbd.sock
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112797 ']'
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:50.740  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:50.740   10:56:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:50.740  [2024-12-09 10:56:07.698911] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:05:50.740  [2024-12-09 10:56:07.699005] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112797 ]
00:05:51.000  [2024-12-09 10:56:07.814887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:51.000  [2024-12-09 10:56:07.911544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:51.000  [2024-12-09 10:56:07.911564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:51.937   10:56:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:51.937   10:56:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:51.937   10:56:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:51.937  Malloc0
00:05:51.937   10:56:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:52.197  Malloc1
00:05:52.197   10:56:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.197   10:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:52.457  /dev/nbd0
00:05:52.457    10:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:52.457   10:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.457  1+0 records in
00:05:52.457  1+0 records out
00:05:52.457  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244904 s, 16.7 MB/s
00:05:52.457    10:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:52.457   10:56:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:52.457   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:52.457   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.457   10:56:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:52.716  /dev/nbd1
00:05:52.716    10:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:52.716   10:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.716  1+0 records in
00:05:52.716  1+0 records out
00:05:52.716  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014347 s, 28.5 MB/s
00:05:52.716    10:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:52.716   10:56:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:52.716   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:52.716   10:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
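The polling visible above (`common/autotest_common.sh@875`..`@893`) is the `waitfornbd` helper: after `nbd_start_disk`, it waits for the device name to appear in `/proc/partitions`, then confirms readability with a direct-I/O `dd` and a size check on the copied block. A minimal sketch of the polling stage, reconstructed from the xtrace rather than the verbatim source — the second (partitions-file) argument and the retry sleep are illustrative additions; the real helper always reads `/proc/partitions` and additionally performs the `dd`/`stat` read check shown in the trace:

```shell
# Sketch of the waitfornbd polling stage, reconstructed from the xtrace.
# $2 (partitions file) is an illustrative parameter for testability; the
# real helper always reads /proc/partitions.
waitfornbd() {
    nbd_name=$1
    partitions=${2:-/proc/partitions}
    i=1
    # Poll up to 20 times for the kernel to publish the device node.
    while [ "$i" -le 20 ]; do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

In the trace both loops hit `break` right after the first `grep`, i.e. the devices were already listed by the time the `nbd_start_disk` RPC returned.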
00:05:52.716    10:56:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:52.716    10:56:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.716     10:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:52.976    10:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:52.976    {
00:05:52.976      "nbd_device": "/dev/nbd0",
00:05:52.976      "bdev_name": "Malloc0"
00:05:52.976    },
00:05:52.976    {
00:05:52.976      "nbd_device": "/dev/nbd1",
00:05:52.976      "bdev_name": "Malloc1"
00:05:52.976    }
00:05:52.976  ]'
00:05:52.976     10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:52.976    {
00:05:52.976      "nbd_device": "/dev/nbd0",
00:05:52.976      "bdev_name": "Malloc0"
00:05:52.976    },
00:05:52.976    {
00:05:52.976      "nbd_device": "/dev/nbd1",
00:05:52.976      "bdev_name": "Malloc1"
00:05:52.976    }
00:05:52.976  ]'
00:05:52.976     10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:52.976    10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:52.976  /dev/nbd1'
00:05:52.976     10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:52.976     10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:52.976  /dev/nbd1'
00:05:52.976    10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:52.976    10:56:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
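The `count=2` check above is `nbd_get_count`: the `nbd_get_disks` RPC returns a JSON array, `jq -r '.[] | .nbd_device'` extracts the device paths, and `grep -c /dev/nbd` counts them (the `true` seen later in the trace, after tear-down, absorbs grep's non-zero exit when the list is empty). The counting stage can be sketched without the RPC — taking the jq output as a plain argument is an assumption for illustration:

```shell
# Sketch of the counting stage of nbd_get_count. $1 stands in for the
# output of `jq -r '.[] | .nbd_device'` on the nbd_get_disks RPC reply.
count_nbd_disks() {
    # grep -c exits 1 when nothing matches, so `|| true` keeps `set -e`
    # harnesses from aborting on an empty disk list; the count itself is
    # still printed to stdout either way.
    echo "$1" | grep -c /dev/nbd || true
}
```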
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:52.976  256+0 records in
00:05:52.976  256+0 records out
00:05:52.976  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375313 s, 279 MB/s
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:52.976   10:56:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:53.235  256+0 records in
00:05:53.235  256+0 records out
00:05:53.235  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217105 s, 48.3 MB/s
00:05:53.235   10:56:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:53.235   10:56:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:53.235  256+0 records in
00:05:53.235  256+0 records out
00:05:53.235  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264877 s, 39.6 MB/s
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
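The write/verify pair above is `nbd_dd_data_verify`: a 1 MiB random reference file (256 x 4 KiB blocks) is copied onto every NBD device with `dd` (`oflag=direct` in the trace, bypassing the page cache so the compare reads what actually reached the device), then each device is compared byte-for-byte against the reference with `cmp`, and the reference file is removed. A sketch reconstructed from the xtrace — `oflag=direct` is deliberately omitted and the tmp-file path is passed as an extra argument so the sketch also runs against ordinary files, both deviations from the real helper:

```shell
# Sketch of nbd_dd_data_verify reconstructed from the xtrace above.
# Deviations for illustration: no oflag=direct, and $3 replaces the
# helper's fixed tmp_file path.
nbd_dd_data_verify() {
    nbd_list=$1      # space-separated device paths
    operation=$2     # "write" or "verify"
    tmp_file=$3
    if [ "$operation" = write ]; then
        # 256 x 4 KiB = 1 MiB random reference pattern
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in $nbd_list; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for dev in $nbd_list; do
            # compare the first 1 MiB byte-for-byte against the reference
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm "$tmp_file"
    fi
}
```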
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:53.235   10:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:53.496    10:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:53.496   10:56:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:53.754    10:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:53.755   10:56:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:53.755    10:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:53.755    10:56:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:53.755     10:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:54.014    10:56:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:54.014     10:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:54.014     10:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:54.014    10:56:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:54.014     10:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:54.014     10:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:54.014     10:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:54.014    10:56:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:54.014    10:56:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:54.014   10:56:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:54.014   10:56:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:54.014   10:56:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
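Tear-down mirrors setup: `nbd_stop_disk` is issued per device and `waitfornbd_exit` polls until the name disappears from `/proc/partitions` — the inverse condition of `waitfornbd` (the xtrace does not show grep's exit status, so the loop direction here is inferred from the helper's purpose). Sketch with the same illustrative partitions-file parameter as before:

```shell
# Sketch of waitfornbd_exit: wait until the device name vanishes from
# the partitions list. $2 is an illustrative parameter; the real helper
# always reads /proc/partitions.
waitfornbd_exit() {
    nbd_name=$1
    partitions=${2:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        if ! grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

As with startup, the trace breaks on the first probe: the kernel had already dropped the entries by the time the stop RPCs returned, and the final `nbd_get_disks` accordingly yields `[]` and `count=0`.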
00:05:54.014   10:56:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:54.273   10:56:11 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:55.211  [2024-12-09 10:56:12.191579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:55.471  [2024-12-09 10:56:12.285300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.471  [2024-12-09 10:56:12.285302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:55.471  [2024-12-09 10:56:12.454190] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:55.471  [2024-12-09 10:56:12.454278] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:57.378   10:56:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:57.378   10:56:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:57.378  spdk_app_start Round 1
00:05:57.378   10:56:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112797 /var/tmp/spdk-nbd.sock
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112797 ']'
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:57.378  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:57.378   10:56:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:57.638   10:56:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:57.638   10:56:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:57.638   10:56:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:57.638  Malloc0
00:05:57.898   10:56:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:58.161  Malloc1
00:05:58.161   10:56:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.161   10:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:58.430  /dev/nbd0
00:05:58.430    10:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:58.430   10:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.430  1+0 records in
00:05:58.430  1+0 records out
00:05:58.430  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187678 s, 21.8 MB/s
00:05:58.430    10:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:58.430   10:56:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:58.430   10:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.430   10:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.430   10:56:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:58.690  /dev/nbd1
00:05:58.690    10:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:58.690   10:56:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:58.690  1+0 records in
00:05:58.690  1+0 records out
00:05:58.690  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148111 s, 27.7 MB/s
00:05:58.690    10:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:58.690   10:56:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:58.690   10:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:58.690   10:56:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:58.690    10:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:58.690    10:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.690     10:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:58.690    10:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:58.690    {
00:05:58.690      "nbd_device": "/dev/nbd0",
00:05:58.690      "bdev_name": "Malloc0"
00:05:58.690    },
00:05:58.690    {
00:05:58.690      "nbd_device": "/dev/nbd1",
00:05:58.690      "bdev_name": "Malloc1"
00:05:58.690    }
00:05:58.690  ]'
00:05:58.690     10:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:58.690    {
00:05:58.690      "nbd_device": "/dev/nbd0",
00:05:58.690      "bdev_name": "Malloc0"
00:05:58.690    },
00:05:58.691    {
00:05:58.691      "nbd_device": "/dev/nbd1",
00:05:58.691      "bdev_name": "Malloc1"
00:05:58.691    }
00:05:58.691  ]'
00:05:58.691     10:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:58.950    10:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:58.950  /dev/nbd1'
00:05:58.950     10:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:58.950     10:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:58.950  /dev/nbd1'
00:05:58.950    10:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:58.950    10:56:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:58.950  256+0 records in
00:05:58.950  256+0 records out
00:05:58.950  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363123 s, 289 MB/s
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:58.950  256+0 records in
00:05:58.950  256+0 records out
00:05:58.950  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206769 s, 50.7 MB/s
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:58.950  256+0 records in
00:05:58.950  256+0 records out
00:05:58.950  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236441 s, 44.3 MB/s
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:58.950   10:56:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:59.210    10:56:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:59.210   10:56:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:59.468    10:56:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:59.468   10:56:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:59.468    10:56:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:59.468    10:56:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:59.468     10:56:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:59.727    10:56:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:59.727     10:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:59.727     10:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:59.727    10:56:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:59.727     10:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:59.727     10:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:59.727     10:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:59.727    10:56:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:59.727    10:56:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:59.727   10:56:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:59.727   10:56:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:59.727   10:56:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:59.727   10:56:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:59.987   10:56:16 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:00.924  [2024-12-09 10:56:17.842444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:01.184  [2024-12-09 10:56:17.943011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.184  [2024-12-09 10:56:17.943026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.184  [2024-12-09 10:56:18.112631] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:01.184  [2024-12-09 10:56:18.112717] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:03.091   10:56:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:03.091   10:56:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:03.091  spdk_app_start Round 2
00:06:03.091   10:56:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112797 /var/tmp/spdk-nbd.sock
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112797 ']'
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:03.091  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:03.091   10:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:03.351   10:56:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:03.351   10:56:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:03.351   10:56:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:03.610  Malloc0
00:06:03.610   10:56:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:03.870  Malloc1
00:06:03.870   10:56:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:03.870   10:56:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:04.129  /dev/nbd0
00:06:04.129    10:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:04.129   10:56:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.129  1+0 records in
00:06:04.129  1+0 records out
00:06:04.129  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210666 s, 19.4 MB/s
00:06:04.129    10:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:04.129   10:56:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:04.129   10:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.129   10:56:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.129   10:56:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:04.389  /dev/nbd1
00:06:04.389    10:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:04.389   10:56:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:04.389  1+0 records in
00:06:04.389  1+0 records out
00:06:04.389  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213782 s, 19.2 MB/s
00:06:04.389    10:56:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:06:04.389   10:56:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:06:04.389   10:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:04.389   10:56:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:04.389    10:56:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:04.389    10:56:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.389     10:56:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:04.648    10:56:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:04.648    {
00:06:04.648      "nbd_device": "/dev/nbd0",
00:06:04.648      "bdev_name": "Malloc0"
00:06:04.648    },
00:06:04.648    {
00:06:04.648      "nbd_device": "/dev/nbd1",
00:06:04.648      "bdev_name": "Malloc1"
00:06:04.648    }
00:06:04.648  ]'
00:06:04.648     10:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:04.649    {
00:06:04.649      "nbd_device": "/dev/nbd0",
00:06:04.649      "bdev_name": "Malloc0"
00:06:04.649    },
00:06:04.649    {
00:06:04.649      "nbd_device": "/dev/nbd1",
00:06:04.649      "bdev_name": "Malloc1"
00:06:04.649    }
00:06:04.649  ]'
00:06:04.649     10:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:04.649    10:56:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:04.649  /dev/nbd1'
00:06:04.649     10:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:04.649     10:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:04.649  /dev/nbd1'
00:06:04.649    10:56:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:04.649    10:56:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:04.649  256+0 records in
00:06:04.649  256+0 records out
00:06:04.649  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00370857 s, 283 MB/s
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:04.649  256+0 records in
00:06:04.649  256+0 records out
00:06:04.649  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206281 s, 50.8 MB/s
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:04.649  256+0 records in
00:06:04.649  256+0 records out
00:06:04.649  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242911 s, 43.2 MB/s
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:04.649   10:56:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:04.908    10:56:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:04.908   10:56:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:05.167    10:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:05.167   10:56:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:05.167    10:56:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:05.167    10:56:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:05.167     10:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:05.425    10:56:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:05.425     10:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:05.425     10:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:05.426    10:56:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:05.426     10:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:05.426     10:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:05.426     10:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:05.426    10:56:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:05.426    10:56:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:05.426   10:56:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:05.426   10:56:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:05.426   10:56:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:05.426   10:56:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:05.994   10:56:22 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:06.931  [2024-12-09 10:56:23.663579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:06.931  [2024-12-09 10:56:23.753745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.931  [2024-12-09 10:56:23.753755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:06.931  [2024-12-09 10:56:23.923480] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:06.931  [2024-12-09 10:56:23.923572] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:08.833   10:56:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112797 /var/tmp/spdk-nbd.sock
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112797 ']'
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:08.833  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.833   10:56:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:09.092   10:56:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.092   10:56:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:06:09.093   10:56:25 event.app_repeat -- event/event.sh@39 -- # killprocess 112797
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112797 ']'
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112797
00:06:09.093    10:56:25 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:09.093    10:56:25 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112797
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112797'
00:06:09.093  killing process with pid 112797
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112797
00:06:09.093   10:56:25 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112797
00:06:10.030  spdk_app_start is called in Round 0.
00:06:10.030  Shutdown signal received, stop current app iteration
00:06:10.031  Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 reinitialization...
00:06:10.031  spdk_app_start is called in Round 1.
00:06:10.031  Shutdown signal received, stop current app iteration
00:06:10.031  Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 reinitialization...
00:06:10.031  spdk_app_start is called in Round 2.
00:06:10.031  Shutdown signal received, stop current app iteration
00:06:10.031  Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 reinitialization...
00:06:10.031  spdk_app_start is called in Round 3.
00:06:10.031  Shutdown signal received, stop current app iteration
00:06:10.031   10:56:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:10.031   10:56:26 event.app_repeat -- event/event.sh@42 -- # return 0
00:06:10.031  
00:06:10.031  real	0m19.157s
00:06:10.031  user	0m40.614s
00:06:10.031  sys	0m2.704s
00:06:10.031   10:56:26 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.031   10:56:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:10.031  ************************************
00:06:10.031  END TEST app_repeat
00:06:10.031  ************************************
00:06:10.031   10:56:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:10.031   10:56:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:10.031   10:56:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:10.031   10:56:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.031   10:56:26 event -- common/autotest_common.sh@10 -- # set +x
00:06:10.031  ************************************
00:06:10.031  START TEST cpu_locks
00:06:10.031  ************************************
00:06:10.031   10:56:26 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:10.031  * Looking for test storage...
00:06:10.031  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:10.031     10:56:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:06:10.031     10:56:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:10.031     10:56:26 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:10.031    10:56:26 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:10.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.031  		--rc genhtml_branch_coverage=1
00:06:10.031  		--rc genhtml_function_coverage=1
00:06:10.031  		--rc genhtml_legend=1
00:06:10.031  		--rc geninfo_all_blocks=1
00:06:10.031  		--rc geninfo_unexecuted_blocks=1
00:06:10.031  		
00:06:10.031  		'
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:10.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.031  		--rc genhtml_branch_coverage=1
00:06:10.031  		--rc genhtml_function_coverage=1
00:06:10.031  		--rc genhtml_legend=1
00:06:10.031  		--rc geninfo_all_blocks=1
00:06:10.031  		--rc geninfo_unexecuted_blocks=1
00:06:10.031  		
00:06:10.031  		'
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:10.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.031  		--rc genhtml_branch_coverage=1
00:06:10.031  		--rc genhtml_function_coverage=1
00:06:10.031  		--rc genhtml_legend=1
00:06:10.031  		--rc geninfo_all_blocks=1
00:06:10.031  		--rc geninfo_unexecuted_blocks=1
00:06:10.031  		
00:06:10.031  		'
00:06:10.031    10:56:26 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:10.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.031  		--rc genhtml_branch_coverage=1
00:06:10.031  		--rc genhtml_function_coverage=1
00:06:10.031  		--rc genhtml_legend=1
00:06:10.031  		--rc geninfo_all_blocks=1
00:06:10.031  		--rc geninfo_unexecuted_blocks=1
00:06:10.031  		
00:06:10.031  		'
00:06:10.031   10:56:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:10.031   10:56:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:10.031   10:56:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:10.031   10:56:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:10.031   10:56:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:10.031   10:56:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.031   10:56:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:10.031  ************************************
00:06:10.031  START TEST default_locks
00:06:10.031  ************************************
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116516
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 116516
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116516 ']'
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:10.031  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:10.031   10:56:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:10.291  [2024-12-09 10:56:27.081281] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:10.291  [2024-12-09 10:56:27.081386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116516 ]
00:06:10.291  [2024-12-09 10:56:27.196906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.291  [2024-12-09 10:56:27.293627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.229   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:11.229   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:06:11.229   10:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 116516
00:06:11.229   10:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 116516
00:06:11.229   10:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:11.488  lslocks: write error
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 116516
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 116516 ']'
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 116516
00:06:11.488    10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:11.488    10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116516
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116516'
00:06:11.488  killing process with pid 116516
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 116516
00:06:11.488   10:56:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 116516
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116516
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 116516
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:13.411    10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 116516
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116516 ']'
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:13.411  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.411  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (116516) - No such process
00:06:13.411  ERROR: process (pid: 116516) is no longer running
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:13.411  
00:06:13.411  real	0m3.244s
00:06:13.411  user	0m3.162s
00:06:13.411  sys	0m0.667s
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:13.411   10:56:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.411  ************************************
00:06:13.411  END TEST default_locks
00:06:13.411  ************************************
00:06:13.411   10:56:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:13.411   10:56:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:13.411   10:56:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:13.411   10:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.411  ************************************
00:06:13.411  START TEST default_locks_via_rpc
00:06:13.411  ************************************
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=117158
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 117158
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 117158 ']'
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:13.411  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:13.411   10:56:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:13.411  [2024-12-09 10:56:30.392603] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:13.411  [2024-12-09 10:56:30.392722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117158 ]
00:06:13.671  [2024-12-09 10:56:30.511366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.671  [2024-12-09 10:56:30.613848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 117158
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 117158
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 117158
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 117158 ']'
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 117158
00:06:14.607    10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:06:14.607   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.607    10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117158
00:06:14.867   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.867   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.867   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117158'
00:06:14.867  killing process with pid 117158
00:06:14.867   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 117158
00:06:14.867   10:56:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 117158
00:06:16.770  
00:06:16.770  real	0m3.257s
00:06:16.770  user	0m3.205s
00:06:16.770  sys	0m0.669s
00:06:16.770   10:56:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:16.770   10:56:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:16.770  ************************************
00:06:16.770  END TEST default_locks_via_rpc
00:06:16.770  ************************************
00:06:16.770   10:56:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:16.770   10:56:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:16.770   10:56:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:16.770   10:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:16.770  ************************************
00:06:16.770  START TEST non_locking_app_on_locked_coremask
00:06:16.770  ************************************
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=117662
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 117662 /var/tmp/spdk.sock
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117662 ']'
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.770  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:16.770   10:56:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:16.770  [2024-12-09 10:56:33.691201] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:16.770  [2024-12-09 10:56:33.691329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117662 ]
00:06:17.029  [2024-12-09 10:56:33.802572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.029  [2024-12-09 10:56:33.899530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=117822
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 117822 /var/tmp/spdk2.sock
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117822 ']'
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:17.968  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:17.968   10:56:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.968  [2024-12-09 10:56:34.721755] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:17.968  [2024-12-09 10:56:34.721904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117822 ]
00:06:17.968  [2024-12-09 10:56:34.884206] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:17.968  [2024-12-09 10:56:34.884257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.229  [2024-12-09 10:56:35.084660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.608   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:19.608   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:19.608   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 117662
00:06:19.608   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117662
00:06:19.608   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:20.176  lslocks: write error
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 117662
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117662 ']'
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117662
00:06:20.176    10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.176    10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117662
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117662'
00:06:20.176  killing process with pid 117662
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117662
00:06:20.176   10:56:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117662
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 117822
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117822 ']'
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117822
00:06:24.370    10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:24.370    10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117822
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117822'
00:06:24.370  killing process with pid 117822
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117822
00:06:24.370   10:56:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117822
00:06:25.749  
00:06:25.749  real	0m9.139s
00:06:25.749  user	0m9.211s
00:06:25.749  sys	0m1.297s
00:06:25.749   10:56:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.749   10:56:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:25.749  ************************************
00:06:25.749  END TEST non_locking_app_on_locked_coremask
00:06:25.749  ************************************
00:06:25.749   10:56:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:25.749   10:56:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.749   10:56:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.749   10:56:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:26.009  ************************************
00:06:26.009  START TEST locking_app_on_unlocked_coremask
00:06:26.009  ************************************
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=119294
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 119294 /var/tmp/spdk.sock
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 119294 ']'
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:26.009  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:26.009   10:56:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:26.009  [2024-12-09 10:56:42.878289] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:26.009  [2024-12-09 10:56:42.878420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119294 ]
00:06:26.009  [2024-12-09 10:56:43.000312] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:26.009  [2024-12-09 10:56:43.000358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.268  [2024-12-09 10:56:43.112207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=119502
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 119502 /var/tmp/spdk2.sock
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 119502 ']'
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:26.836  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:26.836   10:56:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:27.095  [2024-12-09 10:56:43.952486] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:27.095  [2024-12-09 10:56:43.952591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119502 ]
00:06:27.354  [2024-12-09 10:56:44.115910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.354  [2024-12-09 10:56:44.318777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.741   10:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.741   10:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:28.741   10:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 119502
00:06:28.741   10:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 119502
00:06:28.741   10:56:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:29.309  lslocks: write error
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 119294
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 119294 ']'
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 119294
00:06:29.309    10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.309    10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119294
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119294'
00:06:29.309  killing process with pid 119294
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 119294
00:06:29.309   10:56:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 119294
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 119502
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 119502 ']'
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 119502
00:06:33.501    10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.501    10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119502
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119502'
00:06:33.501  killing process with pid 119502
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 119502
00:06:33.501   10:56:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 119502
00:06:35.408  
00:06:35.408  real	0m9.144s
00:06:35.408  user	0m9.170s
00:06:35.408  sys	0m1.340s
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.408  ************************************
00:06:35.408  END TEST locking_app_on_unlocked_coremask
00:06:35.408  ************************************
00:06:35.408   10:56:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:35.408   10:56:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:35.408   10:56:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.408   10:56:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.408  ************************************
00:06:35.408  START TEST locking_app_on_locked_coremask
00:06:35.408  ************************************
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=120990
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 120990 /var/tmp/spdk.sock
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 120990 ']'
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.408  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:35.408   10:56:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.408  [2024-12-09 10:56:52.072857] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:35.408  [2024-12-09 10:56:52.072964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120990 ]
00:06:35.408  [2024-12-09 10:56:52.191718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.408  [2024-12-09 10:56:52.292109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.345   10:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.345   10:56:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=121198
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 121198 /var/tmp/spdk2.sock
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 121198 /var/tmp/spdk2.sock
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.345    10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 121198 /var/tmp/spdk2.sock
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 121198 ']'
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:36.345  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:36.345   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.345  [2024-12-09 10:56:53.104539] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:36.345  [2024-12-09 10:56:53.104647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121198 ]
00:06:36.345  [2024-12-09 10:56:53.262985] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 120990 has claimed it.
00:06:36.345  [2024-12-09 10:56:53.263055] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:36.912  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (121198) - No such process
00:06:36.912  ERROR: process (pid: 121198) is no longer running
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 120990
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 120990
00:06:36.913   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:37.172  lslocks: write error
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 120990
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 120990 ']'
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 120990
00:06:37.172    10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:37.172    10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 120990
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 120990'
00:06:37.172  killing process with pid 120990
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 120990
00:06:37.172   10:56:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 120990
00:06:39.076  
00:06:39.076  real	0m3.900s
00:06:39.076  user	0m4.080s
00:06:39.076  sys	0m0.809s
00:06:39.076   10:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.076   10:56:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.076  ************************************
00:06:39.076  END TEST locking_app_on_locked_coremask
00:06:39.076  ************************************
00:06:39.076   10:56:55 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:39.076   10:56:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:39.076   10:56:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.076   10:56:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:39.076  ************************************
00:06:39.076  START TEST locking_overlapped_coremask
00:06:39.076  ************************************
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=121685
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 121685 /var/tmp/spdk.sock
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 121685 ']'
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.077  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:39.077   10:56:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.077  [2024-12-09 10:56:56.017265] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:39.077  [2024-12-09 10:56:56.017386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121685 ]
00:06:39.336  [2024-12-09 10:56:56.132960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:39.336  [2024-12-09 10:56:56.234320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.336  [2024-12-09 10:56:56.234338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.336  [2024-12-09 10:56:56.234350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=121861
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 121861 /var/tmp/spdk2.sock
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 121861 /var/tmp/spdk2.sock
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.273    10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 121861 /var/tmp/spdk2.sock
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 121861 ']'
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:40.273  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:40.273   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.273  [2024-12-09 10:56:57.105313] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:40.273  [2024-12-09 10:56:57.105413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121861 ]
00:06:40.533  [2024-12-09 10:56:57.287020] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 121685 has claimed it.
00:06:40.533  [2024-12-09 10:56:57.287110] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:40.792  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (121861) - No such process
00:06:40.792  ERROR: process (pid: 121861) is no longer running
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 121685
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 121685 ']'
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 121685
00:06:40.792    10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:40.792    10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121685
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121685'
00:06:40.792  killing process with pid 121685
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 121685
00:06:40.792   10:56:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 121685
00:06:43.326  
00:06:43.326  real	0m3.893s
00:06:43.326  user	0m10.540s
00:06:43.326  sys	0m0.728s
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.326  ************************************
00:06:43.326  END TEST locking_overlapped_coremask
00:06:43.326  ************************************
00:06:43.326   10:56:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:43.326   10:56:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:43.326   10:56:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:43.326   10:56:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:43.326  ************************************
00:06:43.326  START TEST locking_overlapped_coremask_via_rpc
00:06:43.326  ************************************
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=122500
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 122500 /var/tmp/spdk.sock
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122500 ']'
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.326   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:43.327   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.327  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.327   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:43.327   10:56:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.327  [2024-12-09 10:56:59.954511] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:43.327  [2024-12-09 10:56:59.954617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122500 ]
00:06:43.327  [2024-12-09 10:57:00.082404] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:43.327  [2024-12-09 10:57:00.082455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:43.327  [2024-12-09 10:57:00.203675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:43.327  [2024-12-09 10:57:00.203715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.327  [2024-12-09 10:57:00.203735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=122778
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 122778 /var/tmp/spdk2.sock
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122778 ']'
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:44.265  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:44.265   10:57:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:44.265  [2024-12-09 10:57:01.114072] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:44.265  [2024-12-09 10:57:01.114193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122778 ]
00:06:44.524  [2024-12-09 10:57:01.293295] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:44.524  [2024-12-09 10:57:01.293339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:44.524  [2024-12-09 10:57:01.512315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:44.524  [2024-12-09 10:57:01.515884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:44.524  [2024-12-09 10:57:01.515904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:47.059    10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.059  [2024-12-09 10:57:03.711922] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 122500 has claimed it.
00:06:47.059  request:
00:06:47.059  {
00:06:47.059  "method": "framework_enable_cpumask_locks",
00:06:47.059  "req_id": 1
00:06:47.059  }
00:06:47.059  Got JSON-RPC error response
00:06:47.059  response:
00:06:47.059  {
00:06:47.059  "code": -32603,
00:06:47.059  "message": "Failed to claim CPU core: 2"
00:06:47.059  }
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:47.059   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 122500 /var/tmp/spdk.sock
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122500 ']'
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.060  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 122778 /var/tmp/spdk2.sock
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 122778 ']'
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:47.060  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:47.060   10:57:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:47.319  
00:06:47.319  real	0m4.278s
00:06:47.319  user	0m1.321s
00:06:47.319  sys	0m0.201s
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:47.319   10:57:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:47.319  ************************************
00:06:47.319  END TEST locking_overlapped_coremask_via_rpc
00:06:47.319  ************************************
00:06:47.319   10:57:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:47.319   10:57:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 122500 ]]
00:06:47.319   10:57:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 122500
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122500 ']'
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122500
00:06:47.319    10:57:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:47.319    10:57:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122500
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122500'
00:06:47.319  killing process with pid 122500
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 122500
00:06:47.319   10:57:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 122500
00:06:49.857   10:57:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 122778 ]]
00:06:49.857   10:57:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 122778
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122778 ']'
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122778
00:06:49.857    10:57:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:49.857    10:57:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122778
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122778'
00:06:49.857  killing process with pid 122778
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 122778
00:06:49.857   10:57:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 122778
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 122500 ]]
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 122500
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122500 ']'
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122500
00:06:51.760  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (122500) - No such process
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 122500 is not found'
00:06:51.760  Process with pid 122500 is not found
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 122778 ]]
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 122778
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 122778 ']'
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 122778
00:06:51.760  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (122778) - No such process
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 122778 is not found'
00:06:51.760  Process with pid 122778 is not found
00:06:51.760   10:57:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
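The cleanup above kills each reactor PID only if it is still alive, and prints "is not found" for PIDs that already exited (the `kill -0` probe at line 958 fails). A minimal standalone sketch of that killprocess pattern — a toy model of the behavior traced here, not the full autotest_common.sh helper:

```shell
#!/usr/bin/env bash
# Toy sketch of the killprocess pattern traced above: probe with kill -0,
# kill only if the PID is still alive, otherwise report it as gone.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 0
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid" 2>/dev/null || true
        wait "$pid" 2>/dev/null || true
    else
        echo "Process with pid $pid is not found"
    fi
}

# PID 4194305 is above the kernel's pid_max ceiling, so it can never exist.
killprocess 4194305   # → Process with pid 4194305 is not found
```

The `2>/dev/null` on the probe is what keeps the happy path quiet; the raw `kill: (122500) - No such process` lines above come from a bare `kill -0` without that redirection.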
00:06:51.760  
00:06:51.760  real	0m41.567s
00:06:51.760  user	1m13.594s
00:06:51.760  sys	0m6.977s
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.760   10:57:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.760  ************************************
00:06:51.760  END TEST cpu_locks
00:06:51.760  ************************************
00:06:51.760  
00:06:51.760  real	1m11.162s
00:06:51.760  user	2m11.424s
00:06:51.760  sys	0m10.780s
00:06:51.760   10:57:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:51.760   10:57:08 event -- common/autotest_common.sh@10 -- # set +x
00:06:51.760  ************************************
00:06:51.760  END TEST event
00:06:51.760  ************************************
00:06:51.760   10:57:08  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:06:51.760   10:57:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:51.760   10:57:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.760   10:57:08  -- common/autotest_common.sh@10 -- # set +x
00:06:51.760  ************************************
00:06:51.760  START TEST thread
00:06:51.760  ************************************
00:06:51.760   10:57:08 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh
00:06:51.760  * Looking for test storage...
00:06:51.760  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread
00:06:51.760    10:57:08 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:51.760     10:57:08 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:51.760     10:57:08 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:06:51.760    10:57:08 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:51.760    10:57:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:51.761    10:57:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:51.761    10:57:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:51.761    10:57:08 thread -- scripts/common.sh@336 -- # IFS=.-:
00:06:51.761    10:57:08 thread -- scripts/common.sh@336 -- # read -ra ver1
00:06:51.761    10:57:08 thread -- scripts/common.sh@337 -- # IFS=.-:
00:06:51.761    10:57:08 thread -- scripts/common.sh@337 -- # read -ra ver2
00:06:51.761    10:57:08 thread -- scripts/common.sh@338 -- # local 'op=<'
00:06:51.761    10:57:08 thread -- scripts/common.sh@340 -- # ver1_l=2
00:06:51.761    10:57:08 thread -- scripts/common.sh@341 -- # ver2_l=1
00:06:51.761    10:57:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:51.761    10:57:08 thread -- scripts/common.sh@344 -- # case "$op" in
00:06:51.761    10:57:08 thread -- scripts/common.sh@345 -- # : 1
00:06:51.761    10:57:08 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:51.761    10:57:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:51.761     10:57:08 thread -- scripts/common.sh@365 -- # decimal 1
00:06:51.761     10:57:08 thread -- scripts/common.sh@353 -- # local d=1
00:06:51.761     10:57:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:51.761     10:57:08 thread -- scripts/common.sh@355 -- # echo 1
00:06:51.761    10:57:08 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:06:51.761     10:57:08 thread -- scripts/common.sh@366 -- # decimal 2
00:06:51.761     10:57:08 thread -- scripts/common.sh@353 -- # local d=2
00:06:51.761     10:57:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:51.761     10:57:08 thread -- scripts/common.sh@355 -- # echo 2
00:06:51.761    10:57:08 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:06:51.761    10:57:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:51.761    10:57:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:51.761    10:57:08 thread -- scripts/common.sh@368 -- # return 0
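The xtrace above is scripts/common.sh stepping through cmp_versions to decide whether the installed lcov predates 2.x: both versions are split on `.-:`, then compared component by component. A condensed sketch of that comparison — a hypothetical reimplementation inferred from the trace, not the script itself:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions walk traced above: split each version on .-:,
# then compare numerically, position by position. lt A B succeeds iff A < B.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"   # → 1.15 < 2
```

The first mismatching component decides, which is why the trace returns at scripts/common.sh@368 as soon as `ver1[0]=1` compares below `ver2[0]=2`.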
00:06:51.761    10:57:08 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:51.761    10:57:08 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:51.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.761  		--rc genhtml_branch_coverage=1
00:06:51.761  		--rc genhtml_function_coverage=1
00:06:51.761  		--rc genhtml_legend=1
00:06:51.761  		--rc geninfo_all_blocks=1
00:06:51.761  		--rc geninfo_unexecuted_blocks=1
00:06:51.761  		
00:06:51.761  		'
00:06:51.761    10:57:08 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:51.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.761  		--rc genhtml_branch_coverage=1
00:06:51.761  		--rc genhtml_function_coverage=1
00:06:51.761  		--rc genhtml_legend=1
00:06:51.761  		--rc geninfo_all_blocks=1
00:06:51.761  		--rc geninfo_unexecuted_blocks=1
00:06:51.761  		
00:06:51.761  		'
00:06:51.761    10:57:08 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:51.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.761  		--rc genhtml_branch_coverage=1
00:06:51.761  		--rc genhtml_function_coverage=1
00:06:51.761  		--rc genhtml_legend=1
00:06:51.761  		--rc geninfo_all_blocks=1
00:06:51.761  		--rc geninfo_unexecuted_blocks=1
00:06:51.761  		
00:06:51.761  		'
00:06:51.761    10:57:08 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:51.761  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:51.761  		--rc genhtml_branch_coverage=1
00:06:51.761  		--rc genhtml_function_coverage=1
00:06:51.761  		--rc genhtml_legend=1
00:06:51.761  		--rc geninfo_all_blocks=1
00:06:51.761  		--rc geninfo_unexecuted_blocks=1
00:06:51.761  		
00:06:51.761  		'
00:06:51.761   10:57:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:51.761   10:57:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:06:51.761   10:57:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:51.761   10:57:08 thread -- common/autotest_common.sh@10 -- # set +x
00:06:51.761  ************************************
00:06:51.761  START TEST thread_poller_perf
00:06:51.761  ************************************
00:06:51.761   10:57:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:51.761  [2024-12-09 10:57:08.670922] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:51.761  [2024-12-09 10:57:08.671014] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124555 ]
00:06:52.019  [2024-12-09 10:57:08.804891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.019  [2024-12-09 10:57:08.917380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.020  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:53.395  ======================================
00:06:53.395  busy:2207027928 (cyc)
00:06:53.395  total_run_count: 384000
00:06:53.395  tsc_hz: 2200000000 (cyc)
00:06:53.395  ======================================
00:06:53.395  poller_cost: 5747 (cyc), 2612 (nsec)
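The reported poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Recomputing with the values printed above:

```shell
#!/usr/bin/env bash
# Recompute poller_cost from the run above: cycles per poll, then nanoseconds.
busy=2207027928       # total busy cycles reported
runs=384000           # total_run_count
tsc_hz=2200000000     # TSC frequency in cycles/sec
cost_cyc=$(( busy / runs ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# → poller_cost: 5747 (cyc), 2612 (nsec)
```

The same arithmetic explains the second run below: with a 0-microsecond period the pollers fire back to back, so total_run_count is roughly 11x higher and the per-poll cost drops to 503 cycles.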
00:06:53.395  
00:06:53.395  real	0m1.581s
00:06:53.395  user	0m1.441s
00:06:53.395  sys	0m0.133s
00:06:53.395   10:57:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.395   10:57:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:53.395  ************************************
00:06:53.395  END TEST thread_poller_perf
00:06:53.395  ************************************
00:06:53.395   10:57:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:53.395   10:57:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:06:53.395   10:57:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.395   10:57:10 thread -- common/autotest_common.sh@10 -- # set +x
00:06:53.395  ************************************
00:06:53.395  START TEST thread_poller_perf
00:06:53.395  ************************************
00:06:53.395   10:57:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:53.395  [2024-12-09 10:57:10.308077] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:53.395  [2024-12-09 10:57:10.308168] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124985 ]
00:06:53.653  [2024-12-09 10:57:10.420565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.653  [2024-12-09 10:57:10.516873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.653  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:55.032  ======================================
00:06:55.032  busy:2203542558 (cyc)
00:06:55.032  total_run_count: 4378000
00:06:55.032  tsc_hz: 2200000000 (cyc)
00:06:55.032  ======================================
00:06:55.032  poller_cost: 503 (cyc), 228 (nsec)
00:06:55.032  
00:06:55.032  real	0m1.518s
00:06:55.032  user	0m1.377s
00:06:55.032  sys	0m0.133s
00:06:55.032   10:57:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.032   10:57:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:55.032  ************************************
00:06:55.032  END TEST thread_poller_perf
00:06:55.032  ************************************
00:06:55.032   10:57:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:55.032  
00:06:55.032  real	0m3.318s
00:06:55.032  user	0m2.938s
00:06:55.032  sys	0m0.378s
00:06:55.032   10:57:11 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.032   10:57:11 thread -- common/autotest_common.sh@10 -- # set +x
00:06:55.032  ************************************
00:06:55.032  END TEST thread
00:06:55.032  ************************************
00:06:55.032   10:57:11  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:06:55.032   10:57:11  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:06:55.032   10:57:11  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:55.032   10:57:11  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.032   10:57:11  -- common/autotest_common.sh@10 -- # set +x
00:06:55.032  ************************************
00:06:55.032  START TEST app_cmdline
00:06:55.032  ************************************
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh
00:06:55.032  * Looking for test storage...
00:06:55.032  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:55.032     10:57:11 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:06:55.032     10:57:11 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@345 -- # : 1
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:55.032     10:57:11 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:55.032    10:57:11 app_cmdline -- scripts/common.sh@368 -- # return 0
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:55.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.032  		--rc genhtml_branch_coverage=1
00:06:55.032  		--rc genhtml_function_coverage=1
00:06:55.032  		--rc genhtml_legend=1
00:06:55.032  		--rc geninfo_all_blocks=1
00:06:55.032  		--rc geninfo_unexecuted_blocks=1
00:06:55.032  		
00:06:55.032  		'
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:55.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.032  		--rc genhtml_branch_coverage=1
00:06:55.032  		--rc genhtml_function_coverage=1
00:06:55.032  		--rc genhtml_legend=1
00:06:55.032  		--rc geninfo_all_blocks=1
00:06:55.032  		--rc geninfo_unexecuted_blocks=1
00:06:55.032  		
00:06:55.032  		'
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:55.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.032  		--rc genhtml_branch_coverage=1
00:06:55.032  		--rc genhtml_function_coverage=1
00:06:55.032  		--rc genhtml_legend=1
00:06:55.032  		--rc geninfo_all_blocks=1
00:06:55.032  		--rc geninfo_unexecuted_blocks=1
00:06:55.032  		
00:06:55.032  		'
00:06:55.032    10:57:11 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:55.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:55.032  		--rc genhtml_branch_coverage=1
00:06:55.032  		--rc genhtml_function_coverage=1
00:06:55.032  		--rc genhtml_legend=1
00:06:55.032  		--rc geninfo_all_blocks=1
00:06:55.032  		--rc geninfo_unexecuted_blocks=1
00:06:55.032  		
00:06:55.032  		'
00:06:55.032   10:57:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:06:55.032   10:57:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:06:55.032   10:57:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=125263
00:06:55.032   10:57:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 125263
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 125263 ']'
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.032  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:55.032   10:57:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:55.291  [2024-12-09 10:57:12.070645] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:06:55.292  [2024-12-09 10:57:12.070768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125263 ]
00:06:55.292  [2024-12-09 10:57:12.183069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.292  [2024-12-09 10:57:12.280510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.228   10:57:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:56.228   10:57:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:06:56.228   10:57:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:06:56.228  {
00:06:56.228    "version": "SPDK v25.01-pre git sha1 04ba75cf7",
00:06:56.228    "fields": {
00:06:56.228      "major": 25,
00:06:56.228      "minor": 1,
00:06:56.228      "patch": 0,
00:06:56.228      "suffix": "-pre",
00:06:56.228      "commit": "04ba75cf7"
00:06:56.228    }
00:06:56.228  }
00:06:56.228   10:57:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:06:56.228   10:57:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:06:56.228   10:57:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:06:56.228   10:57:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:06:56.228    10:57:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:06:56.228    10:57:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:56.228    10:57:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:56.228    10:57:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:06:56.229    10:57:13 app_cmdline -- app/cmdline.sh@26 -- # sort
00:06:56.229    10:57:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:56.487   10:57:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:06:56.487   10:57:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:06:56.487   10:57:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:56.487    10:57:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:56.487    10:57:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:06:56.487  request:
00:06:56.487  {
00:06:56.487    "method": "env_dpdk_get_mem_stats",
00:06:56.487    "req_id": 1
00:06:56.487  }
00:06:56.487  Got JSON-RPC error response
00:06:56.487  response:
00:06:56.487  {
00:06:56.487    "code": -32601,
00:06:56.487    "message": "Method not found"
00:06:56.487  }
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
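The spdk_tgt here was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so rpc_get_methods reports exactly those two methods and env_dpdk_get_mem_stats is rejected with JSON-RPC -32601, which the NOT helper records as es=1. A toy allowlist dispatcher illustrating that behavior — not SPDK's actual RPC server:

```shell
#!/usr/bin/env bash
# Toy model of the --rpcs-allowed behavior observed above: only listed
# methods are served; anything else gets JSON-RPC error -32601.
allowed="spdk_get_version rpc_get_methods"
dispatch() {
    local method=$1 m
    for m in $allowed; do
        [ "$m" = "$method" ] && { echo "OK: $method"; return 0; }
    done
    echo '{"code": -32601, "message": "Method not found"}'
    return 1
}

dispatch spdk_get_version                  # served
dispatch env_dpdk_get_mem_stats || true    # rejected with -32601
```

-32601 is the standard JSON-RPC "Method not found" code, which is exactly the error object the trace shows in the `response:` block above.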
00:06:56.487   10:57:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 125263
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 125263 ']'
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 125263
00:06:56.487    10:57:13 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:06:56.487   10:57:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:56.487    10:57:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125263
00:06:56.746   10:57:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:56.746   10:57:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:56.746   10:57:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125263'
00:06:56.746  killing process with pid 125263
00:06:56.746   10:57:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 125263
00:06:56.746   10:57:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 125263
00:06:58.651  
00:06:58.651  real	0m3.549s
00:06:58.651  user	0m3.829s
00:06:58.651  sys	0m0.623s
00:06:58.651   10:57:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.651   10:57:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:06:58.651  ************************************
00:06:58.651  END TEST app_cmdline
00:06:58.651  ************************************
00:06:58.651   10:57:15  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:06:58.651   10:57:15  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.651   10:57:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.651   10:57:15  -- common/autotest_common.sh@10 -- # set +x
00:06:58.651  ************************************
00:06:58.651  START TEST version
00:06:58.651  ************************************
00:06:58.651   10:57:15 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh
00:06:58.651  * Looking for test storage...
00:06:58.651  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app
00:06:58.651    10:57:15 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:58.651     10:57:15 version -- common/autotest_common.sh@1711 -- # lcov --version
00:06:58.651     10:57:15 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:58.651    10:57:15 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:58.651    10:57:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:58.651    10:57:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:58.651    10:57:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:58.651    10:57:15 version -- scripts/common.sh@336 -- # IFS=.-:
00:06:58.651    10:57:15 version -- scripts/common.sh@336 -- # read -ra ver1
00:06:58.651    10:57:15 version -- scripts/common.sh@337 -- # IFS=.-:
00:06:58.651    10:57:15 version -- scripts/common.sh@337 -- # read -ra ver2
00:06:58.651    10:57:15 version -- scripts/common.sh@338 -- # local 'op=<'
00:06:58.651    10:57:15 version -- scripts/common.sh@340 -- # ver1_l=2
00:06:58.651    10:57:15 version -- scripts/common.sh@341 -- # ver2_l=1
00:06:58.651    10:57:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:58.651    10:57:15 version -- scripts/common.sh@344 -- # case "$op" in
00:06:58.651    10:57:15 version -- scripts/common.sh@345 -- # : 1
00:06:58.651    10:57:15 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:58.651    10:57:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:58.651     10:57:15 version -- scripts/common.sh@365 -- # decimal 1
00:06:58.651     10:57:15 version -- scripts/common.sh@353 -- # local d=1
00:06:58.651     10:57:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:58.651     10:57:15 version -- scripts/common.sh@355 -- # echo 1
00:06:58.651    10:57:15 version -- scripts/common.sh@365 -- # ver1[v]=1
00:06:58.651     10:57:15 version -- scripts/common.sh@366 -- # decimal 2
00:06:58.651     10:57:15 version -- scripts/common.sh@353 -- # local d=2
00:06:58.651     10:57:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:58.651     10:57:15 version -- scripts/common.sh@355 -- # echo 2
00:06:58.651    10:57:15 version -- scripts/common.sh@366 -- # ver2[v]=2
00:06:58.651    10:57:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:58.651    10:57:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:58.651    10:57:15 version -- scripts/common.sh@368 -- # return 0
00:06:58.651    10:57:15 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:58.651    10:57:15 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:58.651  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.651  		--rc genhtml_branch_coverage=1
00:06:58.651  		--rc genhtml_function_coverage=1
00:06:58.651  		--rc genhtml_legend=1
00:06:58.651  		--rc geninfo_all_blocks=1
00:06:58.651  		--rc geninfo_unexecuted_blocks=1
00:06:58.651  		
00:06:58.651  		'
00:06:58.651    10:57:15 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:58.652  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.652  		--rc genhtml_branch_coverage=1
00:06:58.652  		--rc genhtml_function_coverage=1
00:06:58.652  		--rc genhtml_legend=1
00:06:58.652  		--rc geninfo_all_blocks=1
00:06:58.652  		--rc geninfo_unexecuted_blocks=1
00:06:58.652  		
00:06:58.652  		'
00:06:58.652    10:57:15 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:58.652  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.652  		--rc genhtml_branch_coverage=1
00:06:58.652  		--rc genhtml_function_coverage=1
00:06:58.652  		--rc genhtml_legend=1
00:06:58.652  		--rc geninfo_all_blocks=1
00:06:58.652  		--rc geninfo_unexecuted_blocks=1
00:06:58.652  		
00:06:58.652  		'
00:06:58.652    10:57:15 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:58.652  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.652  		--rc genhtml_branch_coverage=1
00:06:58.652  		--rc genhtml_function_coverage=1
00:06:58.652  		--rc genhtml_legend=1
00:06:58.652  		--rc geninfo_all_blocks=1
00:06:58.652  		--rc geninfo_unexecuted_blocks=1
00:06:58.652  		
00:06:58.652  		'
00:06:58.652    10:57:15 version -- app/version.sh@17 -- # get_header_version major
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # tr -d '"'
00:06:58.652    10:57:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # cut -f2
00:06:58.652   10:57:15 version -- app/version.sh@17 -- # major=25
00:06:58.652    10:57:15 version -- app/version.sh@18 -- # get_header_version minor
00:06:58.652    10:57:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # cut -f2
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # tr -d '"'
00:06:58.652   10:57:15 version -- app/version.sh@18 -- # minor=1
00:06:58.652    10:57:15 version -- app/version.sh@19 -- # get_header_version patch
00:06:58.652    10:57:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # tr -d '"'
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # cut -f2
00:06:58.652   10:57:15 version -- app/version.sh@19 -- # patch=0
00:06:58.652    10:57:15 version -- app/version.sh@20 -- # get_header_version suffix
00:06:58.652    10:57:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # cut -f2
00:06:58.652    10:57:15 version -- app/version.sh@14 -- # tr -d '"'
00:06:58.652   10:57:15 version -- app/version.sh@20 -- # suffix=-pre
00:06:58.652   10:57:15 version -- app/version.sh@22 -- # version=25.1
00:06:58.652   10:57:15 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:58.652   10:57:15 version -- app/version.sh@28 -- # version=25.1rc0
00:06:58.652   10:57:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python
00:06:58.652    10:57:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:58.652   10:57:15 version -- app/version.sh@30 -- # py_version=25.1rc0
00:06:58.652   10:57:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:06:58.652  
00:06:58.652  real	0m0.168s
00:06:58.652  user	0m0.120s
00:06:58.652  sys	0m0.073s
00:06:58.652   10:57:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.652   10:57:15 version -- common/autotest_common.sh@10 -- # set +x
00:06:58.652  ************************************
00:06:58.652  END TEST version
00:06:58.652  ************************************
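The version extraction traced above (app/version.sh pulling the SPDK_VERSION_* macros out of version.h with grep, cut, and tr) can be reproduced standalone. This is a hedged sketch: the stub header, the `get_header_version` helper, and the rc0-suffix rule are reconstructed from the trace, not copied from version.sh itself; the values match the log (major 25, minor 1, patch 0, suffix -pre).

```shell
# Standalone sketch of the app/version.sh parsing traced above.
# The header is a stub carrying the same macros as include/spdk/version.h.
hdr=$(mktemp)
printf '#define SPDK_VERSION_%s\t%s\n' \
    MAJOR 25 MINOR 1 PATCH 0 SUFFIX '"-pre"' > "$hdr"

# Mirrors version.sh@13-14: grep the macro line, take the tab-separated
# value field, strip quotes.
get_header_version() {
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
if (( patch != 0 )); then
    version="$version.$patch"     # patch is only appended when non-zero (@25)
fi
if [[ $suffix == -pre ]]; then
    version="${version}rc0"       # pre-release headers report as an rc (@28)
fi
echo "$version"   # prints: 25.1rc0
rm -f "$hdr"
```

The trace then cross-checks this against `python3 -c 'import spdk; print(spdk.__version__)'`, which must report the same string.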
00:06:58.652   10:57:15  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:06:58.652   10:57:15  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:06:58.652    10:57:15  -- spdk/autotest.sh@194 -- # uname -s
00:06:58.652   10:57:15  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:06:58.652   10:57:15  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:58.652   10:57:15  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:06:58.652   10:57:15  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:06:58.652   10:57:15  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:06:58.652   10:57:15  -- spdk/autotest.sh@260 -- # timing_exit lib
00:06:58.652   10:57:15  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:58.652   10:57:15  -- common/autotest_common.sh@10 -- # set +x
00:06:58.911   10:57:15  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:06:58.911   10:57:15  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:06:58.911   10:57:15  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:06:58.911   10:57:15  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:06:58.911   10:57:15  -- spdk/autotest.sh@315 -- # '[' 1 -eq 1 ']'
00:06:58.911   10:57:15  -- spdk/autotest.sh@316 -- # HUGENODE=0
00:06:58.911   10:57:15  -- spdk/autotest.sh@316 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:06:58.911   10:57:15  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:58.911   10:57:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.911   10:57:15  -- common/autotest_common.sh@10 -- # set +x
00:06:58.911  ************************************
00:06:58.911  START TEST vfio_user_qemu
00:06:58.911  ************************************
00:06:58.911   10:57:15 vfio_user_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso
00:06:58.911  * Looking for test storage...
00:06:58.911  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:06:58.911    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:58.911     10:57:15 vfio_user_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:58.911     10:57:15 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:06:58.911    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@345 -- # : 1
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:58.911     10:57:15 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2
00:06:58.911    10:57:15 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:06:58.912    10:57:15 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:58.912    10:57:15 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:58.912    10:57:15 vfio_user_qemu -- scripts/common.sh@368 -- # return 0
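The cmp_versions walk above (scripts/common.sh deciding that lcov 1.15 is older than 2) splits both version strings on `.`, `-`, and `:` and compares them component by component, treating missing components as zero. A minimal re-implementation of just the `<` case, under the assumption that all components are plain integers:

```shell
# Return 0 (true) iff dotted version $1 < $2, following the
# scripts/common.sh trace above; handles only numeric components.
version_lt() {
    local IFS=.-:             # same separators as scripts/common.sh@336
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then
        len=${#ver2[@]}
    fi
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1   # equal is not less-than
}

if version_lt 1.15 2; then
    echo "lcov 1.15 predates 2"   # this branch is taken, as in the log
fi
```

Because lcov is at least 1.15 here, the test goes on to enable the branch/function coverage flags seen in LCOV_OPTS below.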
00:06:58.912    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:58.912    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:58.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.912  		--rc genhtml_branch_coverage=1
00:06:58.912  		--rc genhtml_function_coverage=1
00:06:58.912  		--rc genhtml_legend=1
00:06:58.912  		--rc geninfo_all_blocks=1
00:06:58.912  		--rc geninfo_unexecuted_blocks=1
00:06:58.912  		
00:06:58.912  		'
00:06:58.912    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:58.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.912  		--rc genhtml_branch_coverage=1
00:06:58.912  		--rc genhtml_function_coverage=1
00:06:58.912  		--rc genhtml_legend=1
00:06:58.912  		--rc geninfo_all_blocks=1
00:06:58.912  		--rc geninfo_unexecuted_blocks=1
00:06:58.912  		
00:06:58.912  		'
00:06:58.912    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:58.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.912  		--rc genhtml_branch_coverage=1
00:06:58.912  		--rc genhtml_function_coverage=1
00:06:58.912  		--rc genhtml_legend=1
00:06:58.912  		--rc geninfo_all_blocks=1
00:06:58.912  		--rc geninfo_unexecuted_blocks=1
00:06:58.912  		
00:06:58.912  		'
00:06:58.912    10:57:15 vfio_user_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:58.912  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.912  		--rc genhtml_branch_coverage=1
00:06:58.912  		--rc genhtml_function_coverage=1
00:06:58.912  		--rc genhtml_legend=1
00:06:58.912  		--rc geninfo_all_blocks=1
00:06:58.912  		--rc geninfo_unexecuted_blocks=1
00:06:58.912  		
00:06:58.912  		'
00:06:58.912   10:57:15 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@6 -- # : false
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img
00:06:58.912      10:57:15 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:06:58.912       10:57:15 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh
00:06:58.912      10:57:15 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:06:58.912      10:57:15 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:06:58.912     10:57:15 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:06:58.912      10:57:15 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:06:58.912       10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:06:58.912        10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:06:58.912        10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:06:58.912        10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:06:58.912        10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:06:58.912       10:57:15 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
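The check_cgroup sequence above decides the cgroup version by probing /sys/fs/cgroup: a unified (v2) hierarchy exposes a root cgroup.controllers file, and the trace additionally confirms cpuset is among the listed controllers before echoing 2. A sketch of that probe, parameterized so it can be pointed at a throwaway root; the v1 fallback to a per-controller cpuset directory is our assumption, since the log only exercises the v2 path:

```shell
# Print 2 for a unified (v2) cgroup hierarchy with cpuset available,
# 1 for a legacy (v1) hierarchy, loosely following scheduler/cgroups.sh@8-10.
detect_cgroup_version() {
    local root=${1:-/sys/fs/cgroup}
    if [[ -e $root/cgroup.controllers ]]; then
        # v2: the tests pin CPUs, so cpuset must be among the controllers
        if [[ $(< "$root/cgroup.controllers") == *cpuset* ]]; then
            echo 2
            return 0
        fi
    elif [[ -d $root/cpuset ]]; then
        echo 1   # assumption: v1 mounts per-controller directories
        return 0
    fi
    return 1
}

# Demo against a fake root rather than the live sysfs:
fake=$(mktemp -d)
echo "cpuset cpu io memory hugetlb pids rdma misc" > "$fake/cgroup.controllers"
detect_cgroup_version "$fake"   # prints: 2
rm -rf "$fake"
```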
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:06:58.912    10:57:15 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:58.912   10:57:15 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...'
00:06:58.912  Running SPDK vfio-user fio autotest...
00:06:58.912   10:57:15 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit
00:06:58.912   10:57:15 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']'
00:06:58.912   10:57:15 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:07:00.288  0000:00:04.7 (8086 6f27): Already using the vfio-pci driver
00:07:00.288  0000:00:04.6 (8086 6f26): Already using the vfio-pci driver
00:07:00.288  0000:00:04.5 (8086 6f25): Already using the vfio-pci driver
00:07:00.288  0000:00:04.4 (8086 6f24): Already using the vfio-pci driver
00:07:00.288  0000:00:04.3 (8086 6f23): Already using the vfio-pci driver
00:07:00.288  0000:00:04.2 (8086 6f22): Already using the vfio-pci driver
00:07:00.288  0000:00:04.1 (8086 6f21): Already using the vfio-pci driver
00:07:00.288  0000:00:04.0 (8086 6f20): Already using the vfio-pci driver
00:07:00.288  0000:80:04.7 (8086 6f27): Already using the vfio-pci driver
00:07:00.288  0000:80:04.6 (8086 6f26): Already using the vfio-pci driver
00:07:00.288  0000:80:04.5 (8086 6f25): Already using the vfio-pci driver
00:07:00.288  0000:80:04.4 (8086 6f24): Already using the vfio-pci driver
00:07:00.288  0000:80:04.3 (8086 6f23): Already using the vfio-pci driver
00:07:00.288  0000:80:04.2 (8086 6f22): Already using the vfio-pci driver
00:07:00.288  0000:80:04.1 (8086 6f21): Already using the vfio-pci driver
00:07:00.288  0000:80:04.0 (8086 6f20): Already using the vfio-pci driver
00:07:00.288  0000:0d:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:00.288   10:57:17 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:00.288   10:57:17 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:00.288   10:57:17 vfio_user_qemu -- vhost/common.sh@42 -- # gzip -dc /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz
00:07:18.375   10:57:32 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
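The vhosttestinit image handling above is a decompress-if-missing pattern: unpack spdk_test_image.qcow2.gz only when the .gz exists and the decompressed image does not (the roughly 18-second jump in the timestamps is the decompression itself). A small helper capturing the pattern; the function name is ours, not SPDK's:

```shell
# Decompress a gzipped disk image only when needed, mirroring the
# vhost/common.sh@41-42 trace above.
unpack_test_image() {
    local img=$1
    if [[ -e $img.gz && ! -e $img ]]; then
        gzip -dc "$img.gz" > "$img"   # keep the .gz; write the expanded copy
    fi
}

# Demo on a throwaway file instead of the real qcow2:
tmp=$(mktemp -d)
echo "disk-image-bytes" > "$tmp/image.qcow2"
gzip "$tmp/image.qcow2"              # leaves only image.qcow2.gz
unpack_test_image "$tmp/image.qcow2"
cat "$tmp/image.qcow2"               # prints: disk-image-bytes
rm -rf "$tmp"
```

Calling the helper again is a no-op once the image exists, which is why the second vhosttestinit run below skips straight past the gzip step.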
00:07:18.375   10:57:32 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:18.375   10:57:32 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:18.375   10:57:32 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:18.375   10:57:32 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:07:18.375  ************************************
00:07:18.375  START TEST vfio_user_nvme_fio
00:07:18.375  ************************************
00:07:18.375   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:18.375  * Looking for test storage...
00:07:18.375  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-:
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-:
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<'
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:18.375     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:18.375  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.375  		--rc genhtml_branch_coverage=1
00:07:18.375  		--rc genhtml_function_coverage=1
00:07:18.375  		--rc genhtml_legend=1
00:07:18.375  		--rc geninfo_all_blocks=1
00:07:18.375  		--rc geninfo_unexecuted_blocks=1
00:07:18.375  		
00:07:18.375  		'
00:07:18.375    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:18.375  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.375  		--rc genhtml_branch_coverage=1
00:07:18.376  		--rc genhtml_function_coverage=1
00:07:18.376  		--rc genhtml_legend=1
00:07:18.376  		--rc geninfo_all_blocks=1
00:07:18.376  		--rc geninfo_unexecuted_blocks=1
00:07:18.376  		
00:07:18.376  		'
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:18.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.376  		--rc genhtml_branch_coverage=1
00:07:18.376  		--rc genhtml_function_coverage=1
00:07:18.376  		--rc genhtml_legend=1
00:07:18.376  		--rc geninfo_all_blocks=1
00:07:18.376  		--rc geninfo_unexecuted_blocks=1
00:07:18.376  		
00:07:18.376  		'
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:18.376  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:18.376  		--rc genhtml_branch_coverage=1
00:07:18.376  		--rc genhtml_function_coverage=1
00:07:18.376  		--rc genhtml_legend=1
00:07:18.376  		--rc geninfo_all_blocks=1
00:07:18.376  		--rc geninfo_unexecuted_blocks=1
00:07:18.376  		
00:07:18.376  		'
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:07:18.376       10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:07:18.376     10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:07:18.376      10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:07:18.376       10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:07:18.376        10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:07:18.376        10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:07:18.376        10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:07:18.376        10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:07:18.376       10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:18.376    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:18.376   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:07:18.377    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0
00:07:18.377    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:07:18.377    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:07:18.377    10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=129606
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 129606
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 129606'
00:07:18.377  Process pid: 129606
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:07:18.377  waiting for app to run...
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 129606 /root/vhost_test/vhost/0/rpc.sock
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # '[' -z 129606 ']'
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:07:18.377  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:18.377   10:57:32 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:18.377  [2024-12-09 10:57:32.795115] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:07:18.377  [2024-12-09 10:57:32.795274] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129606 ]
00:07:18.377  EAL: No free 2048 kB hugepages reported on node 1
00:07:18.377  [2024-12-09 10:57:33.044923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:18.377  [2024-12-09 10:57:33.137709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:18.377  [2024-12-09 10:57:33.137815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:18.377  [2024-12-09 10:57:33.137833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.377  [2024-12-09 10:57:33.137861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@868 -- # return 0
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:18.377    10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0
00:07:18.377   10:57:33 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0
00:07:18.377  Malloc0
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:07:18.377   10:57:34 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:07:18.377   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:18.377   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1
00:07:18.377  Malloc1
00:07:18.377   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:07:18.636   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:07:18.894   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no)
00:07:18.894   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:07:18.894   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser
00:07:18.894   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2
00:07:18.894   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a
00:07:19.153   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no ))
00:07:19.153   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:07:19.153   10:57:35 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config
00:07:22.439   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1
00:07:22.439   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms=
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:22.699    10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:22.699  WARN: removing existing VM in '/root/vhost_test/vms/0'
00:07:22.699  INFO: Creating new VM in /root/vhost_test/vms/0
00:07:22.699  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:22.699  INFO: TASK MASK: 4-5
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:22.699  INFO: NUMA NODE: 0
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl'
00:07:22.699  INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 0 == '' ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.699   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:07:22.700  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:22.700    10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10000
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10001
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10002
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10004
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 100
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:22.700  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:07:22.700  INFO: Creating new VM in /root/vhost_test/vms/1
00:07:22.700  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:22.700  INFO: TASK MASK: 6-7
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:22.700  INFO: NUMA NODE: 0
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:07:22.700  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:07:22.700  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:22.700    10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10100
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10101
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10102
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10104
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 101
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no)
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@518 -- # xtrace_disable
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:22.700  WARN: removing existing VM in '/root/vhost_test/vms/2'
00:07:22.700  INFO: Creating new VM in /root/vhost_test/vms/2
00:07:22.700  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:07:22.700  INFO: TASK MASK: 8-9
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # local node_num=0
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:07:22.700   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:07:22.701  INFO: NUMA NODE: 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # IFS=,
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@702 -- # disk_type=vfio_user
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@704 -- # case $disk_type in
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl'
00:07:22.701  INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@760 -- # [[ 2 == '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@785 -- # (( 0 ))
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh'
00:07:22.701  INFO: Saving to /root/vhost_test/vms/2/run.sh
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # cat
00:07:22.701    10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/2/run.sh
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10200
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 10201
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@829 -- # echo 10202
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/2/migration_port
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@834 -- # echo 10204
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # echo 102
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@843 -- # local run_all=false
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@856 -- # false
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # shift 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/2/run.sh ]]
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@866 -- # vms_to_run+=' 2'
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:22.701   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 0
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:07:22.960  INFO: running /root/vhost_test/vms/0/run.sh
00:07:22.960   10:57:39 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:07:22.960  Running VM in /root/vhost_test/vms/0
00:07:23.897  Waiting for QEMU pid file
00:07:23.897  [2024-12-09 10:57:40.892898] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:07:24.834  === qemu.log ===
00:07:24.834  === qemu.log ===
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:07:24.834  INFO: running /root/vhost_test/vms/1/run.sh
00:07:24.834   10:57:41 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:07:24.834  Running VM in /root/vhost_test/vms/1
00:07:25.093  Waiting for QEMU pid file
00:07:25.352  [2024-12-09 10:57:42.221381] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:26.288  === qemu.log ===
00:07:26.288  === qemu.log ===
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@871 -- # vm_is_running 2
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/2/run.sh'
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh'
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh'
00:07:26.288  INFO: running /root/vhost_test/vms/2/run.sh
00:07:26.288   10:57:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/2/run.sh
00:07:26.288  Running VM in /root/vhost_test/vms/2
00:07:26.546  Waiting for QEMU pid file
00:07:26.805  [2024-12-09 10:57:43.626186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:07:27.740  === qemu.log ===
00:07:27.740  === qemu.log ===
00:07:27.740   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2
00:07:27.741   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@913 -- # assert_number 60
00:07:27.741   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:07:27.741   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@281 -- # return 0
00:07:27.741   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@915 -- # xtrace_disable
00:07:27.741   10:57:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:27.741  INFO: Waiting for VMs to boot
00:07:27.741  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:07:42.629  [2024-12-09 10:57:56.993766] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:42.629  [2024-12-09 10:57:57.002804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:07:42.629  [2024-12-09 10:57:57.006834] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:07:46.821  [2024-12-09 10:58:03.099171] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:07:46.821  [2024-12-09 10:58:03.108233] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:07:46.821  [2024-12-09 10:58:03.112260] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller
00:07:46.821  [2024-12-09 10:58:03.248550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:07:46.821  [2024-12-09 10:58:03.257586] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:07:46.821  [2024-12-09 10:58:03.261611] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller
00:08:01.704  
00:08:01.704  INFO: VM0 ready
00:08:01.704  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:01.963  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:02.899  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:08:03.467  
00:08:03.467  INFO: VM1 ready
00:08:03.467  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:03.725  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:05.103  INFO: waiting for VM2 (/root/vhost_test/vms/2)
00:08:05.362  
00:08:05.362  INFO: VM2 ready
00:08:05.620  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:05.620  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:06.998  INFO: all VMs ready
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # return 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks=
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5'
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5'
00:08:06.998  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM0'
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0'
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0'
00:08:06.998  INFO: Starting fio server on VM0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio'
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:06.998   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:06.998    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:06.999    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:06.999    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:06.999    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:06.999    10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:06.999   10:58:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:06.999  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:07.566  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:07.566   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model'
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:07.566     10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:07.566    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:07.825  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1'
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7'
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:07.825   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:07.825    10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:07.826   10:58:24 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:08:08.084  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:08:08.084  INFO: Starting fio server on VM1
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:08.084    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:08.084   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:08.084  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:08.651  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:08.651   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:08.651    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:08.652    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.652    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.652    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:08.652    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:08.652     10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:08.652    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:08.652  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1'
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9'
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:08.910    10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:08.910   10:58:25 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9'
00:08:08.910  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@978 -- # local readonly=
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@979 -- # local fio_bin=
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@993 -- # shift 1
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM2'
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2'
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2'
00:08:09.169  INFO: Starting fio server on VM2
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@997 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio'
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:09.169    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:09.169   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:08:09.169  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@998 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:09.428    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:09.428   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:08:09.428  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:09.686   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model'
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # awk -F/ '{print $5"n1"}'
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:09.686     10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:09.686    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model'
00:08:09.686  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1045 -- # SCSI_DISK=nvme0n1
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # [[ -z nvme0n1 ]]
00:08:09.944    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1'
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local arg
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local job_file=
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # vms=()
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local vms
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local out=
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1058 -- # local vm
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:08:09.944   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local job_fname
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=0
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 0 'cat > /root/default_integrity.job'
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:09.945    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:09.945   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job'
00:08:09.945  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 0 cat /root/default_integrity.job
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:10.203   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:10.203    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:10.203    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:10.203    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:10.204    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.204    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:10.204    10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:10.204   10:58:26 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job
00:08:10.204  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:10.204  [global]
00:08:10.204  blocksize_range=4k-512k
00:08:10.204  iodepth=512
00:08:10.204  iodepth_batch=128
00:08:10.204  iodepth_low=256
00:08:10.204  ioengine=libaio
00:08:10.204  size=1G
00:08:10.204  io_size=4G
00:08:10.204  filename=/dev/nvme0n1
00:08:10.204  group_reporting
00:08:10.204  thread
00:08:10.204  numjobs=1
00:08:10.204  direct=1
00:08:10.204  rw=randwrite
00:08:10.204  do_verify=1
00:08:10.204  verify=md5
00:08:10.204  verify_backlog=1024
00:08:10.204  fsync_on_close=1
00:08:10.204  verify_state_save=0
00:08:10.204  [nvme-host]
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 0
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 0
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/0
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/0/fio_socket
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job '
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:10.204    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:10.204   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:08:10.462  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:10.462    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:10.462   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:08:10.721  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:10.721  [global]
00:08:10.721  blocksize_range=4k-512k
00:08:10.721  iodepth=512
00:08:10.721  iodepth_batch=128
00:08:10.721  iodepth_low=256
00:08:10.721  ioengine=libaio
00:08:10.721  size=1G
00:08:10.721  io_size=4G
00:08:10.721  filename=/dev/nvme0n1
00:08:10.721  group_reporting
00:08:10.721  thread
00:08:10.721  numjobs=1
00:08:10.721  direct=1
00:08:10.721  rw=randwrite
00:08:10.721  do_verify=1
00:08:10.721  verify=md5
00:08:10.721  verify_backlog=1024
00:08:10.721  fsync_on_close=1
00:08:10.721  verify_state_save=0
00:08:10.721  [nvme-host]
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1115 -- # local vm_num=2
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1116 -- # local vmdisks=/dev/nvme0n1
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1119 -- # vm_exec 2 'cat > /root/default_integrity.job'
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:10.721    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:10.721   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job'
00:08:10.721  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # false
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1125 -- # vm_exec 2 cat /root/default_integrity.job
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:10.980   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:10.980    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:10.980    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:10.980    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:10.981    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:10.981    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:10.981    10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:10.981   10:58:27 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job
00:08:10.981  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:11.238  [global]
00:08:11.238  blocksize_range=4k-512k
00:08:11.238  iodepth=512
00:08:11.238  iodepth_batch=128
00:08:11.238  iodepth_low=256
00:08:11.238  ioengine=libaio
00:08:11.238  size=1G
00:08:11.238  io_size=4G
00:08:11.238  filename=/dev/nvme0n1
00:08:11.238  group_reporting
00:08:11.238  thread
00:08:11.238  numjobs=1
00:08:11.238  direct=1
00:08:11.238  rw=randwrite
00:08:11.238  do_verify=1
00:08:11.238  verify=md5
00:08:11.238  verify_backlog=1024
00:08:11.238  fsync_on_close=1
00:08:11.238  verify_state_save=0
00:08:11.238  [nvme-host]
00:08:11.238   10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1127 -- # true
00:08:11.238    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # vm_fio_socket 2
00:08:11.238    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@326 -- # vm_num_is_valid 2
00:08:11.238    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:11.238    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:11.239    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/2
00:08:11.239    10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/2/fio_socket
00:08:11.239   10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job '
00:08:11.239   10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1131 -- # true
00:08:11.239   10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1147 -- # true
00:08:11.239   10:58:28 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job
00:08:26.119   10:58:42 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1162 -- # sleep 1
00:08:27.056   10:58:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:08:27.056   10:58:43 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:08:27.056   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:08:27.056  hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:27.056  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:27.056  hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:08:27.056  <VM-2-8-9> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:27.056  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:27.056  <VM-0-4-5> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:08:27.056  <VM-2-8-9> Starting 1 thread
00:08:27.056  <VM-0-4-5> Starting 1 thread
00:08:27.056  <VM-1-6-7> Starting 1 thread
00:08:27.056  <VM-2-8-9> 
00:08:27.056  nvme-host: (groupid=0, jobs=1): err= 0: pid=950: Mon Dec  9 10:58:40 2024
00:08:27.056    read: IOPS=1121, BW=188MiB/s (197MB/s)(2048MiB/10885msec)
00:08:27.056      slat (usec): min=45, max=17426, avg=7925.59, stdev=4134.24
00:08:27.056      clat (msec): min=4, max=491, avg=159.85, stdev=88.84
00:08:27.056       lat (msec): min=8, max=501, avg=167.77, stdev=89.21
00:08:27.056      clat percentiles (msec):
00:08:27.056       |  1.00th=[    8],  5.00th=[   22], 10.00th=[   53], 20.00th=[   85],
00:08:27.056       | 30.00th=[  107], 40.00th=[  130], 50.00th=[  150], 60.00th=[  174],
00:08:27.056       | 70.00th=[  201], 80.00th=[  232], 90.00th=[  279], 95.00th=[  317],
00:08:27.056       | 99.00th=[  418], 99.50th=[  456], 99.90th=[  485], 99.95th=[  489],
00:08:27.056       | 99.99th=[  493]
00:08:27.056    write: IOPS=1193, BW=200MiB/s (210MB/s)(2048MiB/10227msec); 0 zone resets
00:08:27.056      slat (usec): min=282, max=90396, avg=27111.55, stdev=17428.59
00:08:27.056      clat (msec): min=6, max=382, avg=138.01, stdev=75.77
00:08:27.056       lat (msec): min=7, max=458, avg=165.12, stdev=80.50
00:08:27.056      clat percentiles (msec):
00:08:27.056       |  1.00th=[   13],  5.00th=[   27], 10.00th=[   38], 20.00th=[   73],
00:08:27.056       | 30.00th=[   94], 40.00th=[  113], 50.00th=[  126], 60.00th=[  148],
00:08:27.056       | 70.00th=[  171], 80.00th=[  201], 90.00th=[  239], 95.00th=[  284],
00:08:27.056       | 99.00th=[  384], 99.50th=[  384], 99.90th=[  384], 99.95th=[  384],
00:08:27.056       | 99.99th=[  384]
00:08:27.056     bw (  KiB/s): min= 6552, max=407248, per=100.00%, avg=226744.00, stdev=125361.42, samples=18
00:08:27.056     iops        : min=   32, max= 2048, avg=1304.00, stdev=705.05, samples=18
00:08:27.056    lat (msec)   : 10=1.51%, 20=2.11%, 50=7.92%, 100=19.30%, 250=56.65%
00:08:27.056    lat (msec)   : 500=12.51%
00:08:27.056    cpu          : usr=85.84%, sys=2.28%, ctx=1071, majf=0, minf=34
00:08:27.056    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:08:27.056       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:08:27.056       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:27.056       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.056       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:27.056  
00:08:27.056  Run status group 0 (all jobs):
00:08:27.056     READ: bw=188MiB/s (197MB/s), 188MiB/s-188MiB/s (197MB/s-197MB/s), io=2048MiB (2147MB), run=10885-10885msec
00:08:27.056    WRITE: bw=200MiB/s (210MB/s), 200MiB/s-200MiB/s (210MB/s-210MB/s), io=2048MiB (2147MB), run=10227-10227msec
00:08:27.056  
00:08:27.056  Disk stats (read/write):
00:08:27.056    nvme0n1: ios=5/0, merge=0/0, ticks=1/0, in_queue=1, util=22.82%
00:08:27.056  <VM-1-6-7> 
00:08:27.056  nvme-host: (groupid=0, jobs=1): err= 0: pid=953: Mon Dec  9 10:58:42 2024
00:08:27.056    read: IOPS=840, BW=164MiB/s (172MB/s)(2072MiB/12661msec)
00:08:27.056      slat (usec): min=27, max=32162, avg=11581.63, stdev=7887.62
00:08:27.056      clat (usec): min=1523, max=66835, avg=26634.40, stdev=14927.72
00:08:27.056       lat (usec): min=1778, max=70891, avg=38216.03, stdev=13466.43
00:08:27.056      clat percentiles (usec):
00:08:27.056       |  1.00th=[ 1680],  5.00th=[ 2409], 10.00th=[ 8717], 20.00th=[12780],
00:08:27.056       | 30.00th=[15664], 40.00th=[21365], 50.00th=[27657], 60.00th=[30802],
00:08:27.056       | 70.00th=[33424], 80.00th=[41157], 90.00th=[45351], 95.00th=[53216],
00:08:27.056       | 99.00th=[66847], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847],
00:08:27.056       | 99.99th=[66847]
00:08:27.056    write: IOPS=1778, BW=346MiB/s (363MB/s)(2072MiB/5980msec); 0 zone resets
00:08:27.056      slat (usec): min=257, max=107812, avg=31079.67, stdev=20096.03
00:08:27.056      clat (usec): min=1135, max=257228, avg=73763.51, stdev=55382.85
00:08:27.056       lat (msec): min=2, max=261, avg=104.84, stdev=62.30
00:08:27.056      clat percentiles (msec):
00:08:27.056       |  1.00th=[    4],  5.00th=[    8], 10.00th=[   11], 20.00th=[   16],
00:08:27.056       | 30.00th=[   28], 40.00th=[   49], 50.00th=[   68], 60.00th=[   85],
00:08:27.056       | 70.00th=[  110], 80.00th=[  127], 90.00th=[  159], 95.00th=[  174],
00:08:27.056       | 99.00th=[  201], 99.50th=[  201], 99.90th=[  230], 99.95th=[  230],
00:08:27.056       | 99.99th=[  257]
00:08:27.056     bw (  KiB/s): min=156517, max=314288, per=47.98%, avg=170213.21, stdev=44374.67, samples=24
00:08:27.056     iops        : min=  784, max= 1576, avg=853.50, stdev=222.53, samples=24
00:08:27.056    lat (msec)   : 2=1.57%, 4=2.65%, 10=5.65%, 20=22.33%, 50=36.53%
00:08:27.056    lat (msec)   : 100=14.94%, 250=16.32%, 500=0.02%
00:08:27.056    cpu          : usr=84.21%, sys=1.67%, ctx=1176, majf=0, minf=16
00:08:27.056    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:27.056       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:27.056       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:27.056       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.056       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:27.056  
00:08:27.056  Run status group 0 (all jobs):
00:08:27.056     READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=2072MiB (2172MB), run=12661-12661msec
00:08:27.056    WRITE: bw=346MiB/s (363MB/s), 346MiB/s-346MiB/s (363MB/s-363MB/s), io=2072MiB (2172MB), run=5980-5980msec
00:08:27.056  
00:08:27.056  Disk stats (read/write):
00:08:27.056    nvme0n1: ios=80/0, merge=0/0, ticks=16/0, in_queue=16, util=34.07%
00:08:27.056  <VM-0-4-5> 
00:08:27.056  nvme-host: (groupid=0, jobs=1): err= 0: pid=949: Mon Dec  9 10:58:42 2024
00:08:27.056    read: IOPS=802, BW=156MiB/s (164MB/s)(2072MiB/13251msec)
00:08:27.056      slat (usec): min=36, max=31800, avg=10616.69, stdev=7804.41
00:08:27.056      clat (usec): min=1209, max=58422, avg=22981.86, stdev=12527.89
00:08:27.056       lat (usec): min=2035, max=63791, avg=33598.55, stdev=12214.93
00:08:27.056      clat percentiles (usec):
00:08:27.056       |  1.00th=[ 1762],  5.00th=[ 5211], 10.00th=[ 8094], 20.00th=[11863],
00:08:27.056       | 30.00th=[13698], 40.00th=[15270], 50.00th=[21103], 60.00th=[27132],
00:08:27.056       | 70.00th=[31327], 80.00th=[35914], 90.00th=[41681], 95.00th=[44303],
00:08:27.056       | 99.00th=[48497], 99.50th=[48497], 99.90th=[58459], 99.95th=[58459],
00:08:27.056       | 99.99th=[58459]
00:08:27.056    write: IOPS=1628, BW=317MiB/s (333MB/s)(2072MiB/6531msec); 0 zone resets
00:08:27.056      slat (usec): min=322, max=121165, avg=34422.60, stdev=22296.32
00:08:27.057      clat (usec): min=1333, max=264551, avg=80295.42, stdev=60085.02
00:08:27.057       lat (msec): min=4, max=276, avg=114.72, stdev=67.60
00:08:27.057      clat percentiles (msec):
00:08:27.057       |  1.00th=[    5],  5.00th=[    8], 10.00th=[   12], 20.00th=[   17],
00:08:27.057       | 30.00th=[   28], 40.00th=[   53], 50.00th=[   72], 60.00th=[   87],
00:08:27.057       | 70.00th=[  120], 80.00th=[  146], 90.00th=[  157], 95.00th=[  186],
00:08:27.057       | 99.00th=[  228], 99.50th=[  245], 99.90th=[  266], 99.95th=[  266],
00:08:27.057       | 99.99th=[  266]
00:08:27.057     bw (  KiB/s): min=156830, max=157144, per=48.37%, avg=157131.92, stdev=61.58, samples=26
00:08:27.057     iops        : min=  786, max=  788, avg=787.92, stdev= 0.39, samples=26
00:08:27.057    lat (msec)   : 2=1.04%, 4=0.80%, 10=8.55%, 20=24.34%, 50=34.98%
00:08:27.057    lat (msec)   : 100=12.75%, 250=17.32%, 500=0.23%
00:08:27.057    cpu          : usr=84.24%, sys=1.71%, ctx=1101, majf=0, minf=16
00:08:27.057    IO depths    : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8%
00:08:27.057       submit    : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0%
00:08:27.057       complete  : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:08:27.057       issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.057       latency   : target=0, window=0, percentile=100.00%, depth=512
00:08:27.057  
00:08:27.057  Run status group 0 (all jobs):
00:08:27.057     READ: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=2072MiB (2172MB), run=13251-13251msec
00:08:27.057    WRITE: bw=317MiB/s (333MB/s), 317MiB/s-317MiB/s (333MB/s-333MB/s), io=2072MiB (2172MB), run=6531-6531msec
00:08:27.057  
00:08:27.057  Disk stats (read/write):
00:08:27.057    nvme0n1: ios=80/0, merge=0/0, ticks=32/0, in_queue=32, util=28.70%
00:08:27.057  All clients: (groupid=0, jobs=3): err= 0: pid=0: Mon Dec  9 10:58:42 2024
00:08:27.057    read: IOPS=2526, BW=467Mi (490M)(6191MiB/13251msec)
00:08:27.057      slat (usec): min=27, max=32162, avg=9942.10, stdev=6915.69
00:08:27.057      clat (usec): min=1209, max=491671, avg=74042.31, stdev=85000.79
00:08:27.057       lat (usec): min=1778, max=501494, avg=83984.41, stdev=83892.55
00:08:27.057    write: IOPS=3274, BW=605Mi (635M)(6191MiB/10227msec); 0 zone resets
00:08:27.057      slat (usec): min=257, max=121165, avg=30694.99, stdev=20152.17
00:08:27.057      clat (usec): min=1135, max=382868, avg=99262.75, stdev=71292.39
00:08:27.057       lat (msec): min=2, max=458, avg=129.96, stdev=75.98
00:08:27.057     bw (  KiB/s): min=319899, max=878680, per=62.63%, avg=554089.13, stdev=68289.65, samples=68
00:08:27.057     iops        : min= 1602, max= 4412, avg=2945.42, stdev=378.32, samples=68
00:08:27.057    lat (msec)   : 2=0.83%, 4=1.10%, 10=5.06%, 20=15.59%, 50=25.60%
00:08:27.057    lat (msec)   : 100=15.83%, 250=31.34%, 500=4.64%
00:08:27.057    cpu          : usr=84.70%, sys=1.86%, ctx=3348, majf=0, minf=66
00:08:27.057    IO depths    : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0%
00:08:27.057       submit    : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6%
00:08:27.057       complete  : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:08:27.057       issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # vm_list_all
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/0 ]]
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131133
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131133
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0'
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0'
00:08:27.057  INFO: Shutting down virtual machine /root/vhost_test/vms/0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=0
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:08:27.057    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:08:27.057   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.317  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM0 is shutting down - wait a while to complete'
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete'
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete'
00:08:27.317  INFO: VM0 is shutting down - wait a while to complete
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131373
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131373
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:08:27.317  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=1
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:08:27.317    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:08:27.317   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.577  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:08:27.577  INFO: VM1 is shutting down - wait a while to complete
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@492 -- # vm_shutdown 2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_num_is_valid 2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/2 ]]
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # vm_is_running 2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131606
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131606
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2'
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2'
00:08:27.577  INFO: Shutting down virtual machine /root/vhost_test/vms/2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@432 -- # set +e
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@433 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@336 -- # vm_num_is_valid 2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@338 -- # local vm_num=2
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@339 -- # shift
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # vm_ssh_socket 2
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2
00:08:27.577    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/ssh_socket
00:08:27.577   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:08:27.836  Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts.
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@434 -- # notice 'VM2 is shutting down - wait a while to complete'
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete'
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.836   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete'
00:08:27.836  INFO: VM2 is shutting down - wait a while to complete
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # set -e
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:08:27.837  INFO: Waiting for VMs to shutdown...
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.837    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/0/qemu.pid
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131133
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131133
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.837    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131373
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131373
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:27.837    10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131606
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131606
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:27.837   10:58:44 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:28.773  [2024-12-09 10:58:45.458037] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller
00:08:28.773  [2024-12-09 10:58:45.758000] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 3 > 0 ))
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:29.032    10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131373
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131373
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@376 -- # local vm_pid
00:08:29.032    10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/2/qemu.pid
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@377 -- # vm_pid=131606
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@379 -- # /bin/kill -0 131606
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@380 -- # return 0
00:08:29.032   10:58:45 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:29.033  [2024-12-09 10:58:45.875413] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 2 > 0 ))
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # vm_is_running 2
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # vm_num_is_valid 2
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/2
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 1
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:08:29.976   10:58:46 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@500 -- # sleep 1
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:08:30.913  INFO: All VMs successfully shut down
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@505 -- # return 0
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:30.913    10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser
00:08:30.913   10:58:47 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0
00:08:31.172   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0
00:08:31.431   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:31.431   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0
00:08:31.690   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:31.690   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:31.690   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:08:31.949   10:58:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:08:32.207   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:32.207   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1
00:08:32.776   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no)
00:08:32.776   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser
00:08:32.776   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0
00:08:32.776   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2
00:08:33.034   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no ))
00:08:33.034   10:58:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:34.938    10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:34.938    10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:34.938    10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:34.938    10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid
00:08:34.938    10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=129606
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 129606) app'
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 129606) app'
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 129606) app'
00:08:34.938  INFO: killing vhost (PID 129606) app
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 129606
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:08:34.938  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 129606
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo .
00:08:34.938  .
00:08:34.938   10:58:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ ))
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 129606
00:08:35.876  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (129606) - No such process
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 129606
00:08:35.876  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (129606) - No such process
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 129606
00:08:35.876  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (129606) - No such process
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@245 -- # is_pid_child 129606
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1686 -- # local pid=129606 _pid
00:08:35.876    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1692 -- # return 1
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:08:35.876   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@261 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@476 -- # local vm
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # vm_list_all
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # vms=()
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@466 -- # local vms
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@468 -- # (( 3 > 0 ))
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 1
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@478 -- # vm_kill 2
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@442 -- # vm_num_is_valid 2
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@309 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/2
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@446 -- # return 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found'
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found'
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out=
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found'
00:08:35.877  WARN: no vhost pid file found
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0
00:08:35.877  
00:08:35.877  real	1m20.229s
00:08:35.877  user	5m19.230s
00:08:35.877  sys	0m3.549s
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:08:35.877  ************************************
00:08:35.877  END TEST vfio_user_nvme_fio
00:08:35.877  ************************************
00:08:35.877   10:58:52 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:35.877   10:58:52 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:35.877   10:58:52 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.877   10:58:52 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:08:35.877  ************************************
00:08:35.877  START TEST vfio_user_nvme_restart_vm
00:08:35.877  ************************************
00:08:35.877   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:35.877  * Looking for test storage...
00:08:35.877  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:35.877     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:35.877  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.877  		--rc genhtml_branch_coverage=1
00:08:35.877  		--rc genhtml_function_coverage=1
00:08:35.877  		--rc genhtml_legend=1
00:08:35.877  		--rc geninfo_all_blocks=1
00:08:35.877  		--rc geninfo_unexecuted_blocks=1
00:08:35.877  		
00:08:35.877  		'
00:08:35.877    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:35.877  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.877  		--rc genhtml_branch_coverage=1
00:08:35.877  		--rc genhtml_function_coverage=1
00:08:35.877  		--rc genhtml_legend=1
00:08:35.877  		--rc geninfo_all_blocks=1
00:08:35.877  		--rc geninfo_unexecuted_blocks=1
00:08:35.877  		
00:08:35.877  		'
00:08:35.878    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:35.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.878  		--rc genhtml_branch_coverage=1
00:08:35.878  		--rc genhtml_function_coverage=1
00:08:35.878  		--rc genhtml_legend=1
00:08:35.878  		--rc geninfo_all_blocks=1
00:08:35.878  		--rc geninfo_unexecuted_blocks=1
00:08:35.878  		
00:08:35.878  		'
00:08:35.878    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:35.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.878  		--rc genhtml_branch_coverage=1
00:08:35.878  		--rc genhtml_function_coverage=1
00:08:35.878  		--rc genhtml_legend=1
00:08:35.878  		--rc geninfo_all_blocks=1
00:08:35.878  		--rc geninfo_unexecuted_blocks=1
00:08:35.878  		
00:08:35.878  		'
00:08:35.878   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:08:35.878    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:08:35.878    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:08:35.878    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:08:35.878     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false
00:08:35.878     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:08:35.878     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:35.878     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:08:35.878      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:08:35.878     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:08:36.138       10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:08:36.138      10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:08:36.138       10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:08:36.138        10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:08:36.138        10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:08:36.138        10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:08:36.138        10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:08:36.138       10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:36.138   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh
00:08:36.138   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:08:36.138   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:36.138     10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:36.138    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:36.139    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py
00:08:36.139    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0
00:08:36.139    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:08:36.139    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:08:36.139    10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=144378
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 144378
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 144378'
00:08:36.139  Process pid: 144378
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...'
00:08:36.139  waiting for app to run...
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 144378 /root/vhost_test/vhost/0/rpc.sock
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 144378 ']'
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:08:36.139  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:36.139   10:58:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:36.139  [2024-12-09 10:58:53.060383] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:08:36.139  [2024-12-09 10:58:53.060493] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144378 ]
00:08:36.139  EAL: No free 2048 kB hugepages reported on node 1
00:08:36.399  [2024-12-09 10:58:53.375259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:36.658  [2024-12-09 10:58:53.480127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:36.658  [2024-12-09 10:58:53.480199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:36.658  [2024-12-09 10:58:53.480240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.659  [2024-12-09 10:58:53.480261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:36.918   10:58:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:36.918   10:58:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:08:36.918   10:58:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1
00:08:37.177   10:58:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:08:40.470  Nvme0n1
00:08:40.470   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a
00:08:40.470   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1
00:08:40.729   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:40.989  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:08:40.989  INFO: Creating new VM in /root/vhost_test/vms/1
00:08:40.989  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:08:40.989  INFO: TASK MASK: 6-7
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:08:40.989  INFO: NUMA NODE: 0
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:08:40.989   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:08:40.990  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:08:40.990  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:08:40.990    10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:08:40.990  INFO: running /root/vhost_test/vms/1/run.sh
00:08:40.990   10:58:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:08:40.990  Running VM in /root/vhost_test/vms/1
00:08:41.250  Waiting for QEMU pid file
00:08:41.509  [2024-12-09 10:58:58.483165] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:08:42.446  === qemu.log ===
00:08:42.446  === qemu.log ===
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:08:42.446   10:58:59 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:42.446  INFO: Waiting for VMs to boot
00:08:42.446  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:08:57.330  [2024-12-09 10:59:12.471580] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:57.330  [2024-12-09 10:59:12.480596] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:08:57.330  [2024-12-09 10:59:12.484623] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:05.454  
00:09:05.454  INFO: VM1 ready
00:09:05.454  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:05.454  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:06.022  INFO: all VMs ready
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:06.022    10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:06.022   10:59:22 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:09:06.022  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:06.282  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:09:06.282  sda       8:0    0     5G  0 disk 
00:09:06.282  ├─sda1    8:1    0     1M  0 part 
00:09:06.282  ├─sda2    8:2    0  1000M  0 part /boot
00:09:06.282  ├─sda3    8:3    0   100M  0 part /boot/efi
00:09:06.282  ├─sda4    8:4    0     4M  0 part 
00:09:06.282  └─sda5    8:5    0   3.9G  0 part /home
00:09:06.282                                    /
00:09:06.282  zram0   252:0    0   946M  0 disk [SWAP]
00:09:06.282  nvme0n1 259:1    0 931.5G  0 disk 
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=145263
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 145263
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:06.282  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:06.282    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:06.282   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:06.282  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:06.542  INFO: VM1 is shutting down - wait a while to complete
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:06.542  INFO: Waiting for VMs to shutdown...
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:06.542    10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=145263
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 145263
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:06.542   10:59:23 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:07.478    10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=145263
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 145263
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:07.478   10:59:24 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:08.045  [2024-12-09 10:59:25.043512] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:08.612   10:59:25 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:09.548  INFO: All VMs successfully shut down
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:09.548  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:09:09.548  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:09.548  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:09.548  INFO: TASK MASK: 6-7
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:09.548   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:09.549  INFO: NUMA NODE: 0
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@758 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl'
00:09:09.549  INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@759 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl")
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@760 -- # [[ 1 == '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:09.549  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # cat
00:09:09.549    10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@856 -- # false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:09.549  INFO: running /root/vhost_test/vms/1/run.sh
00:09:09.549   10:59:26 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:09.549  Running VM in /root/vhost_test/vms/1
00:09:09.808  Waiting for QEMU pid file
00:09:10.067  [2024-12-09 10:59:26.980964] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:11.003  === qemu.log ===
00:09:11.003  === qemu.log ===
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:11.003   10:59:27 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:11.003  INFO: Waiting for VMs to boot
00:09:11.003  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:25.883  [2024-12-09 10:59:41.054526] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:25.883  [2024-12-09 10:59:41.063553] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller
00:09:25.883  [2024-12-09 10:59:41.067579] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller
00:09:33.998  
00:09:33.998  INFO: VM1 ready
00:09:33.998  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:33.998  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:34.298  INFO: all VMs ready
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@973 -- # return 0
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:34.298    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:34.298   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk
00:09:34.298  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:34.556  NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
00:09:34.556  sda       8:0    0     5G  0 disk 
00:09:34.556  ├─sda1    8:1    0     1M  0 part 
00:09:34.556  ├─sda2    8:2    0  1000M  0 part /boot
00:09:34.556  ├─sda3    8:3    0   100M  0 part /boot/efi
00:09:34.556  ├─sda4    8:4    0     4M  0 part 
00:09:34.556  └─sda5    8:5    0   3.9G  0 part /home
00:09:34.556                                    /
00:09:34.556  zram0   252:0    0   946M  0 disk [SWAP]
00:09:34.556  nvme0n1 259:1    0 931.5G  0 disk 
00:09:34.556   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
00:09:34.815   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:35.074    10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:35.074   10:59:51 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove'
00:09:35.074  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=150450
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 150450
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:09:35.333  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@432 -- # set +e
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@339 -- # shift
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:09:35.333    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:35.333   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:09:35.333  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.592   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:09:35.593  INFO: VM1 is shutting down - wait a while to complete
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # set -e
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:09:35.593  INFO: Waiting for VMs to shutdown...
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:35.593    10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=150450
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 150450
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:35.593   10:59:52 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:09:36.530    10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@377 -- # vm_pid=150450
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 150450
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@380 -- # return 0
00:09:36.530   10:59:53 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:09:37.907   10:59:54 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:09:38.842  INFO: All VMs successfully shut down
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@505 -- # return 0
00:09:38.842   10:59:55 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:09:40.218   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@476 -- # local vm
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # vm_list_all
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # vms=()
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@466 -- # local vms
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:09:40.477    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@478 -- # vm_kill 1
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@446 -- # return 0
00:09:40.477   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:09:40.478    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:09:40.478    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:40.478    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:40.478    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:09:40.478    10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=144378
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 144378) app'
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 144378) app'
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 144378) app'
00:09:40.478  INFO: killing vhost (PID 144378) app
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 144378
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:09:40.478  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 144378
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo .
00:09:40.478  .
00:09:40.478   10:59:57 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 144378
00:09:41.415  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (144378) - No such process
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 144378
00:09:41.415  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (144378) - No such process
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 144378
00:09:41.415  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (144378) - No such process
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@245 -- # is_pid_child 144378
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1686 -- # local pid=144378 _pid
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:09:41.415    10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@261 -- # return 0
00:09:41.415  
00:09:41.415  real	1m5.631s
00:09:41.415  user	4m17.127s
00:09:41.415  sys	0m1.741s
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.415   10:59:58 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:41.415  ************************************
00:09:41.415  END TEST vfio_user_nvme_restart_vm
00:09:41.415  ************************************
00:09:41.415   10:59:58 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:09:41.415   10:59:58 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:41.415   10:59:58 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.415   10:59:58 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:09:41.674  ************************************
00:09:41.674  START TEST vfio_user_virtio_blk_restart_vm
00:09:41.674  ************************************
00:09:41.674   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk
00:09:41.674  * Looking for test storage...
00:09:41.674  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:41.674     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:41.674    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:41.674  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.674  		--rc genhtml_branch_coverage=1
00:09:41.675  		--rc genhtml_function_coverage=1
00:09:41.675  		--rc genhtml_legend=1
00:09:41.675  		--rc geninfo_all_blocks=1
00:09:41.675  		--rc geninfo_unexecuted_blocks=1
00:09:41.675  		
00:09:41.675  		'
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:41.675  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.675  		--rc genhtml_branch_coverage=1
00:09:41.675  		--rc genhtml_function_coverage=1
00:09:41.675  		--rc genhtml_legend=1
00:09:41.675  		--rc geninfo_all_blocks=1
00:09:41.675  		--rc geninfo_unexecuted_blocks=1
00:09:41.675  		
00:09:41.675  		'
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:41.675  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.675  		--rc genhtml_branch_coverage=1
00:09:41.675  		--rc genhtml_function_coverage=1
00:09:41.675  		--rc genhtml_legend=1
00:09:41.675  		--rc geninfo_all_blocks=1
00:09:41.675  		--rc geninfo_unexecuted_blocks=1
00:09:41.675  		
00:09:41.675  		'
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:41.675  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:41.675  		--rc genhtml_branch_coverage=1
00:09:41.675  		--rc genhtml_function_coverage=1
00:09:41.675  		--rc genhtml_legend=1
00:09:41.675  		--rc geninfo_all_blocks=1
00:09:41.675  		--rc geninfo_unexecuted_blocks=1
00:09:41.675  		
00:09:41.675  		'
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:09:41.675       10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:09:41.675      10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:09:41.675       10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:09:41.675        10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:09:41.675        10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:09:41.675        10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:09:41.675        10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:09:41.675       10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:41.675     10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]]
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:09:41.675    10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=156177
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 156177
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 156177'
00:09:41.675  Process pid: 156177
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:09:41.675  waiting for app to run...
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 156177 /root/vhost_test/vhost/0/rpc.sock
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 156177 ']'
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.675   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:09:41.676  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:09:41.676   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.676   10:59:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:41.935  [2024-12-09 10:59:58.753476] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:09:41.935  [2024-12-09 10:59:58.753600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156177 ]
00:09:41.935  EAL: No free 2048 kB hugepages reported on node 1
00:09:42.194  [2024-12-09 10:59:59.014009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:42.194  [2024-12-09 10:59:59.110144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:42.194  [2024-12-09 10:59:59.110226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:42.194  [2024-12-09 10:59:59.110264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.194  [2024-12-09 10:59:59.110284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:09:43.131   10:59:59 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:09:46.418  Nvme0n1
00:09:46.418   11:00:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:09:46.418   11:00:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:09:46.418   11:00:02 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:46.418  INFO: Creating new VM in /root/vhost_test/vms/1
00:09:46.418  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:09:46.418  INFO: TASK MASK: 6-7
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:09:46.418  INFO: NUMA NODE: 0
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:09:46.418  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:09:46.418  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:09:46.418   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:09:46.419    11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:09:46.419  INFO: running /root/vhost_test/vms/1/run.sh
00:09:46.419   11:00:03 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:09:46.419  Running VM in /root/vhost_test/vms/1
00:09:46.987  [2024-12-09 11:00:03.737746] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:09:46.987  Waiting for QEMU pid file
00:09:47.924  === qemu.log ===
00:09:47.924  === qemu.log ===
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:09:47.924   11:00:04 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:09:47.924  INFO: Waiting for VMs to boot
00:09:47.924  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:10:14.471  
00:10:14.471  INFO: VM1 ready
00:10:14.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.471  INFO: all VMs ready
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.471    11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.471   11:00:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:10:14.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@993 -- # shift 1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:10:14.471  INFO: Starting fio server on VM1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.471    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:10:14.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.471   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:10:14.472  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.472     11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:10:14.472  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda'
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local arg
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local vms
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local out=
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1058 -- # local vm
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/vda
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:10:14.472  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # false
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:14.472   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.472    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.473    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.473    11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:14.473   11:00:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:10:14.473  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:14.473  [global]
00:10:14.473  blocksize_range=4k-512k
00:10:14.473  iodepth=512
00:10:14.473  iodepth_batch=128
00:10:14.473  iodepth_low=256
00:10:14.473  ioengine=libaio
00:10:14.473  size=1G
00:10:14.473  io_size=4G
00:10:14.473  filename=/dev/vda
00:10:14.473  group_reporting
00:10:14.473  thread
00:10:14.473  numjobs=1
00:10:14.473  direct=1
00:10:14.473  rw=randwrite
00:10:14.473  do_verify=1
00:10:14.473  verify=md5
00:10:14.473  verify_backlog=1024
00:10:14.473  fsync_on_close=1
00:10:14.473  verify_state_save=0
00:10:14.473  [nvme-host]
00:10:14.473   11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1127 -- # true
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:10:14.473    11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:10:14.473   11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:10:14.473   11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1131 -- # true
00:10:14.473   11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1147 -- # true
00:10:14.473   11:00:30 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:10:24.456   11:00:40 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:10:24.716  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:10:24.716  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:10:24.716  <VM-1-6-7> Starting 1 thread
00:10:24.716  <VM-1-6-7> 
00:10:24.716  nvme-host: (groupid=0, jobs=1): err= 0: pid=950: Mon Dec  9 11:00:40 2024
00:10:24.716    read: IOPS=1348, BW=226MiB/s (237MB/s)(2048MiB/9055msec)
00:10:24.716      slat (usec): min=46, max=18101, avg=2247.06, stdev=3538.98
00:10:24.716      clat (msec): min=6, max=381, avg=130.77, stdev=73.49
00:10:24.716       lat (msec): min=6, max=382, avg=133.02, stdev=73.13
00:10:24.716      clat percentiles (msec):
00:10:24.716       |  1.00th=[    9],  5.00th=[   19], 10.00th=[   43], 20.00th=[   71],
00:10:24.716       | 30.00th=[   85], 40.00th=[  104], 50.00th=[  122], 60.00th=[  140],
00:10:24.716       | 70.00th=[  165], 80.00th=[  192], 90.00th=[  230], 95.00th=[  271],
00:10:24.716       | 99.00th=[  330], 99.50th=[  359], 99.90th=[  376], 99.95th=[  380],
00:10:24.716       | 99.99th=[  380]
00:10:24.716    write: IOPS=1432, BW=240MiB/s (252MB/s)(2048MiB/8520msec); 0 zone resets
00:10:24.716      slat (usec): min=233, max=92454, avg=21239.03, stdev=15381.16
00:10:24.716      clat (msec): min=6, max=284, avg=117.17, stdev=64.27
00:10:24.716       lat (msec): min=8, max=339, avg=138.41, stdev=68.43
00:10:24.716      clat percentiles (msec):
00:10:24.716       |  1.00th=[    9],  5.00th=[   20], 10.00th=[   31], 20.00th=[   64],
00:10:24.716       | 30.00th=[   80], 40.00th=[   93], 50.00th=[  110], 60.00th=[  127],
00:10:24.716       | 70.00th=[  148], 80.00th=[  174], 90.00th=[  209], 95.00th=[  239],
00:10:24.716       | 99.00th=[  268], 99.50th=[  284], 99.90th=[  284], 99.95th=[  284],
00:10:24.716       | 99.99th=[  284]
00:10:24.716     bw (  KiB/s): min=21768, max=364920, per=94.67%, avg=233016.89, stdev=95638.94, samples=18
00:10:24.716     iops        : min=  134, max= 2048, avg=1356.44, stdev=648.27, samples=18
00:10:24.716    lat (msec)   : 10=1.83%, 20=4.01%, 50=7.74%, 100=27.51%, 250=53.65%
00:10:24.716    lat (msec)   : 500=5.26%
00:10:24.716    cpu          : usr=94.94%, sys=1.54%, ctx=534, majf=0, minf=34
00:10:24.716    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:10:24.716       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:10:24.716       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:24.716       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:24.716       latency   : target=0, window=0, percentile=100.00%, depth=512
00:10:24.716  
00:10:24.716  Run status group 0 (all jobs):
00:10:24.716     READ: bw=226MiB/s (237MB/s), 226MiB/s-226MiB/s (237MB/s-237MB/s), io=2048MiB (2147MB), run=9055-9055msec
00:10:24.716    WRITE: bw=240MiB/s (252MB/s), 240MiB/s-240MiB/s (252MB/s-252MB/s), io=2048MiB (2147MB), run=8520-8520msec
00:10:24.716  
00:10:24.716  Disk stats (read/write):
00:10:24.716    vda: ios=12114/12141, merge=51/72, ticks=136421/97947, in_queue=234369, util=28.67%
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:10:24.716  INFO: Shutting down virtual machine...
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:10:24.716   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:10:24.716    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:10:24.716    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=157161
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 157161
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:10:24.717  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:10:24.717    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:10:24.717   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:10:25.003  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:10:25.003  INFO: VM1 is shutting down - wait a while to complete
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:10:25.003  INFO: Waiting for VMs to shutdown...
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:25.003    11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=157161
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 157161
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:25.003   11:00:41 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:10:26.381    11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=157161
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 157161
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:10:26.381   11:00:42 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:10:27.317   11:00:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:10:28.261  INFO: All VMs successfully shut down
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
00:10:28.261   11:00:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:28.261  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:10:28.261  INFO: Creating new VM in /root/vhost_test/vms/1
00:10:28.261  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:10:28.261  INFO: TASK MASK: 6-7
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:10:28.261  INFO: NUMA NODE: 0
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:10:28.261  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:10:28.261  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # cat
00:10:28.261    11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # echo 101
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@856 -- # false
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # shift 0
00:10:28.261   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:10:28.262  INFO: running /root/vhost_test/vms/1/run.sh
00:10:28.262   11:00:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:10:28.262  Running VM in /root/vhost_test/vms/1
00:10:28.520  [2024-12-09 11:00:45.458222] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:10:28.778  Waiting for QEMU pid file
00:10:29.714  === qemu.log ===
00:10:29.714  === qemu.log ===
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@281 -- # return 0
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:10:29.714   11:00:46 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:10:29.714  INFO: Waiting for VMs to boot
00:10:29.714  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:08.441  
00:11:08.442  INFO: VM1 ready
00:11:08.442  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:08.442  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:08.442  INFO: all VMs ready
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # return 0
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1035 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*'
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*'
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # vm_exec 1 bash -s
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.442     11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:11:08.442  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1036 -- # SCSI_DISK=vda
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1038 -- # [[ -z vda ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:11:08.442  INFO: Shutting down virtual machine...
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # vms=()
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@466 -- # local vms
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=165108
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 165108
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:11:08.442  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@432 -- # set +e
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@339 -- # shift
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.442    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:11:08.442  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:11:08.442  INFO: VM1 is shutting down - wait a while to complete
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@435 -- # set -e
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:08.442   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:11:08.443  INFO: Waiting for VMs to shutdown...
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:08.443    11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=165108
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 165108
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:08.443   11:01:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:11:08.443    11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@377 -- # vm_pid=165108
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 165108
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@380 -- # return 0
00:11:08.443   11:01:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 1
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:11:08.443   11:01:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:11:08.701  INFO: All VMs successfully shut down
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@505 -- # return 0
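The trace above is vhost/common.sh's shutdown wait: each iteration reads a VM's `qemu.pid`, probes the process with `kill -0`, and drops the VM from the `vms` array once the pid file disappears, until the array empties or `timeo` runs out. A minimal sketch of that pattern (the function name and the 90-iteration default are illustrative, not SPDK's API):

```shell
#!/usr/bin/env bash
# Wait until every listed pid file refers to a dead process, or time out.
# Returns 0 when all processes are gone, 1 on timeout.
wait_for_pids_gone() {
    local timeo=${1:-90}; shift
    local -A pids=()
    local f
    for f in "$@"; do pids[$f]=1; done
    while (( timeo-- > 0 && ${#pids[@]} > 0 )); do
        for f in "${!pids[@]}"; do
            # A missing/unreadable pid file means the VM already cleaned up.
            [[ -r $f ]] || { unset -v 'pids[$f]'; continue; }
            # kill -0 only probes for existence; it delivers no signal.
            kill -0 "$(<"$f")" 2>/dev/null || unset -v 'pids[$f]'
        done
        if (( ${#pids[@]} > 0 )); then sleep 1; fi
    done
    (( ${#pids[@]} == 0 ))
}
```

As in the log, a still-running VM costs one `sleep 1` per pass, so the timeout is roughly `timeo` seconds.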
00:11:08.701   11:01:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:11:08.959  [2024-12-09 11:01:25.903546] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:11:10.336    11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:11:10.336    11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:10.336    11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:10.336    11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:11:10.336    11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=156177
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 156177) app'
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 156177) app'
00:11:10.336   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 156177) app'
00:11:10.337  INFO: killing vhost (PID 156177) app
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 156177
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:11:10.337  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 156177
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:10.337  .
00:11:10.337   11:01:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:11.714   11:01:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:11.714   11:01:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:11.714   11:01:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 156177
00:11:11.714   11:01:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo .
00:11:11.714  .
00:11:11.714   11:01:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 156177
00:11:12.652  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (156177) - No such process
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 156177
00:11:12.652  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (156177) - No such process
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 156177
00:11:12.652  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (156177) - No such process
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@245 -- # is_pid_child 156177
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1686 -- # local pid=156177 _pid
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@261 -- # return 0
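vhost_kill above sends SIGINT to the app and then polls `kill -0` once per second for up to 60 seconds, printing a dot per pass; the `No such process` lines at @227/@234/@239 are the expected evidence that the app exited. A hedged sketch of that graceful-kill pattern (function name is illustrative; only the SIGINT-then-probe loop mirrors this log):

```shell
#!/usr/bin/env bash
# Send SIGINT to a pid and wait up to $2 seconds for it to exit.
# Returns 0 once the process is gone, 1 if it is still alive at timeout.
graceful_kill() {
    local pid=$1 budget=${2:-60} i
    kill -INT "$pid" 2>/dev/null || true
    for (( i = 0; i < budget; i++ )); do
        # kill -0 fails with ESRCH once the process has exited.
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 1
    done
    return 1
}
```

A caller would typically escalate to SIGKILL when this returns 1, as the surrounding script does on timeout.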
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:11:12.652  
00:11:12.652  real	1m30.899s
00:11:12.652  user	5m57.874s
00:11:12.652  sys	0m1.950s
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:12.652  ************************************
00:11:12.652  END TEST vfio_user_virtio_blk_restart_vm
00:11:12.652  ************************************
00:11:12.652   11:01:29 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:11:12.652   11:01:29 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:12.652   11:01:29 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:12.652   11:01:29 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:11:12.652  ************************************
00:11:12.652  START TEST vfio_user_virtio_scsi_restart_vm
00:11:12.652  ************************************
00:11:12.652   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi
00:11:12.652  * Looking for test storage...
00:11:12.652  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lcov --version
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-:
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-:
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<'
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1
00:11:12.652    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2
00:11:12.652     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:12.653  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:12.653  		--rc genhtml_branch_coverage=1
00:11:12.653  		--rc genhtml_function_coverage=1
00:11:12.653  		--rc genhtml_legend=1
00:11:12.653  		--rc geninfo_all_blocks=1
00:11:12.653  		--rc geninfo_unexecuted_blocks=1
00:11:12.653  		
00:11:12.653  		'
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:12.653  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:12.653  		--rc genhtml_branch_coverage=1
00:11:12.653  		--rc genhtml_function_coverage=1
00:11:12.653  		--rc genhtml_legend=1
00:11:12.653  		--rc geninfo_all_blocks=1
00:11:12.653  		--rc geninfo_unexecuted_blocks=1
00:11:12.653  		
00:11:12.653  		'
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:12.653  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:12.653  		--rc genhtml_branch_coverage=1
00:11:12.653  		--rc genhtml_function_coverage=1
00:11:12.653  		--rc genhtml_legend=1
00:11:12.653  		--rc geninfo_all_blocks=1
00:11:12.653  		--rc geninfo_unexecuted_blocks=1
00:11:12.653  		
00:11:12.653  		'
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:12.653  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:12.653  		--rc genhtml_branch_coverage=1
00:11:12.653  		--rc genhtml_function_coverage=1
00:11:12.653  		--rc genhtml_legend=1
00:11:12.653  		--rc geninfo_all_blocks=1
00:11:12.653  		--rc geninfo_unexecuted_blocks=1
00:11:12.653  		
00:11:12.653  		'
00:11:12.653   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:11:12.653       11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:11:12.653     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:11:12.653      11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:11:12.653       11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:11:12.653        11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup
00:11:12.653        11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:11:12.653        11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:11:12.653        11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2
00:11:12.653       11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2
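check_cgroup above (scheduler/cgroups.sh @8–@10) settles on cgroup v2 by testing that `/sys/fs/cgroup/cgroup.controllers` exists and lists `cpuset`. A sketch of that detection; the v1 fallback branch here is an assumption added for illustration, only the v2 path is shown in this log:

```shell
#!/usr/bin/env bash
# Detect the mounted cgroup hierarchy version under a given root.
# cgroup.controllers exists only on a unified (v2) hierarchy.
check_cgroup() {
    local root=${1:-/sys/fs/cgroup}
    if [[ -e $root/cgroup.controllers ]] &&
       [[ $(< "$root/cgroup.controllers") == *cpuset* ]]; then
        echo 2   # v2 with the cpuset controller available
    elif [[ -e $root/cpuset ]]; then
        echo 1   # assumed legacy (v1) layout: per-controller directories
    else
        return 1
    fi
}
```

On this runner the controllers file lists `cpuset cpu io memory hugetlb pids rdma misc`, so the trace sets `cgroup_version=2`.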
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:11:12.653   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:11:12.653   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:11:12.653    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs))
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # local bdfs
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:12.654     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:12.654     11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0d:00.0
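The `get_nvme_bdfs` trace above captures NVMe PCI addresses into a bash array. A minimal self-contained sketch of that pattern follows; `get_nvme_bdfs_stub` is a hypothetical stand-in that hard-codes the one BDF seen in this run, since the real pipeline (`gen_nvme.sh | jq -r '.config[].params.traddr'`) needs an SPDK checkout and `jq` on the host:

```shell
# Sketch of the get_nvme_bdfs flow traced above. The stub replaces the real
# "gen_nvme.sh | jq" pipeline with the single BDF this run discovered.
get_nvme_bdfs_stub() {
    local bdfs=()
    # real code: bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdfs=($(printf '%s\n' 0000:0d:00.0))
    (( ${#bdfs[@]} == 0 )) && return 1   # fail if no controllers were found
    printf '%s\n' "${bdfs[@]}"           # one BDF per line, as the caller expects
}

# Caller side, mirroring fio_restart_vm.sh@14: word-split the output into an array.
bdfs=($(get_nvme_bdfs_stub))
echo "${bdfs[0]}"
```

The array capture relies on default IFS word splitting, which is safe here because PCI BDFs never contain whitespace.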
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]]
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]]
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:11:12.654    11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=173038
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 173038
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 173038'
00:11:12.654  Process pid: 173038
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:11:12.654  waiting for app to run...
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 173038 /root/vhost_test/vhost/0/rpc.sock
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # '[' -z 173038 ']'
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:11:12.654  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:12.654   11:01:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:12.913  [2024-12-09 11:01:29.711343] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:11:12.913  [2024-12-09 11:01:29.711470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173038 ]
00:11:12.913  EAL: No free 2048 kB hugepages reported on node 1
00:11:13.171  [2024-12-09 11:01:29.973173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:13.171  [2024-12-09 11:01:30.072297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:13.171  [2024-12-09 11:01:30.072348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:13.172  [2024-12-09 11:01:30.072381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:13.172  [2024-12-09 11:01:30.072403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@868 -- # return 0
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:11:14.108   11:01:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:0d:00.0
00:11:17.392  Nvme0n1
00:11:17.392   11:01:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:11:17.392   11:01:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:11:17.392   11:01:33 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:11:17.392   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]]
00:11:17.392   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:17.392   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring
00:11:17.392   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1
00:11:17.652  [2024-12-09 11:01:34.485663] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1'
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:17.652  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:11:17.652  INFO: Creating new VM in /root/vhost_test/vms/1
00:11:17.652  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:11:17.652  INFO: TASK MASK: 6-7
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:11:17.652  INFO: NUMA NODE: 0
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:11:17.652   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:11:17.653  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:11:17.653  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:11:17.653    11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
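The `run.sh` contents printed above are assembled incrementally with bash's `cmd+=()` array-append idiom, which keeps each QEMU argument as a separate word until the final `printf`. A small sketch of that assembly, using values from this run (memory size, CPU count, socket path); QEMU is not launched, the array is only rendered to a line, just as `run.sh` generation does:

```shell
# Sketch of the cmd+=() assembly used by vhost/common.sh to build run.sh.
# Values are the ones from this run; nothing is executed.
guest_memory=1024 cpu_num=2 vnc_socket=101 node_num=0
cmd=(/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64)
cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vnc ":$vnc_socket")
cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1")

# Render the array to a single command line, as the printf '%s\n' above does.
printf '%s\n' "${cmd[*]}"
```

Keeping arguments in an array rather than a flat string means words with commas or equals signs (the `-object` and `-device` options) survive without extra quoting when the command is eventually run.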
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
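The `vm_is_running` check traced above (common.sh@369-373) treats a VM as running only if its `qemu.pid` file is readable; the `return 1` here fires because the pid file does not exist yet. A hedged sketch of that check, with a liveness probe added for the case where the file exists; `vm_is_running_sketch` and the temp directory are stand-ins, not the real `/root/vhost_test/vms/1`:

```shell
# Sketch of the vm_is_running logic: no readable qemu.pid means "not running";
# otherwise probe the recorded pid with kill -0 (signal 0 sends nothing, it
# only tests whether the process exists).
vm_is_running_sketch() {
    local vm_dir=$1
    [[ -r $vm_dir/qemu.pid ]] || return 1
    local pid
    pid=$(<"$vm_dir/qemu.pid")
    kill -0 "$pid" 2>/dev/null
}

vm_dir=$(mktemp -d)   # empty stand-in for /root/vhost_test/vms/1
vm_is_running_sketch "$vm_dir" && echo running || echo "not running"
```

Because the directory is fresh, the sketch reports "not running", matching the `return 1` in the trace, after which the harness proceeds to launch `run.sh`.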
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:11:17.653  INFO: running /root/vhost_test/vms/1/run.sh
00:11:17.653   11:01:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:11:17.653  Running VM in /root/vhost_test/vms/1
00:11:18.221  [2024-12-09 11:01:34.959985] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:11:18.221  Waiting for QEMU pid file
00:11:19.157  === qemu.log ===
00:11:19.157  === qemu.log ===
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:11:19.157   11:01:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:11:19.157  INFO: Waiting for VMs to boot
00:11:19.157  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:11:34.037  [2024-12-09 11:01:49.867520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:11:55.970  
00:11:55.970  INFO: VM1 ready
00:11:55.970  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:55.970  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:56.537  INFO: all VMs ready
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:56.537    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:56.537   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:11:56.537  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local OPTIND optchar
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@978 -- # local readonly=
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@979 -- # local fio_bin=
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@981 -- # case "$optchar" in
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@983 -- # case "$OPTARG" in
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@980 -- # getopts :-: optchar
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@993 -- # shift 1
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@994 -- # for vm_num in "$@"
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:11:56.796  INFO: Starting fio server on VM1
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:56.796    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:56.796   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:11:56.796  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.054   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:57.054    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.055    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.055    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.055    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.055    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.055    11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.055   11:02:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:11:57.055  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart=
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:11:57.313  	for entry in /sys/block/sd*; do
00:11:57.313  		disk_type="$(cat $entry/device/vendor)";
00:11:57.313  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:57.313  			fname=$(basename $entry);
00:11:57.313  			echo -n " $fname";
00:11:57.313  		fi;
00:11:57.313  	done'
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:11:57.313  	for entry in /sys/block/sd*; do
00:11:57.313  		disk_type="$(cat $entry/device/vendor)";
00:11:57.313  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:11:57.313  			fname=$(basename $entry);
00:11:57.313  			echo -n " $fname";
00:11:57.313  		fi;
00:11:57.313  	done'
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.313     11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:11:57.313  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb'
00:11:57.313    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb'
00:11:57.313   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local arg
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local job_file=
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_bin=
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # vms=()
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local vms
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local out=
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1058 -- # local vm
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # local run_server_mode=true
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local fio_start_cmd
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # for arg in "$@"
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1067 -- # case "$arg" in
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]]
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local job_fname
00:11:57.573    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # job_fname=default_integrity.job
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1110 -- # log_fname=default_integrity.log
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal '
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1115 -- # local vm_num=1
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1116 -- # local vmdisks=/dev/sdb
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_integrity.job'
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.573   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:57.573    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.573    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.573    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.573    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'
00:11:57.574  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # false
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_integrity.job
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.574    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:11:57.574   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job
00:11:57.833  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:11:57.833  [global]
00:11:57.833  blocksize_range=4k-512k
00:11:57.833  iodepth=512
00:11:57.833  iodepth_batch=128
00:11:57.833  iodepth_low=256
00:11:57.833  ioengine=libaio
00:11:57.833  size=1G
00:11:57.833  io_size=4G
00:11:57.833  filename=/dev/sdb
00:11:57.833  group_reporting
00:11:57.833  thread
00:11:57.833  numjobs=1
00:11:57.833  direct=1
00:11:57.833  rw=randwrite
00:11:57.833  do_verify=1
00:11:57.833  verify=md5
00:11:57.833  verify_backlog=1024
00:11:57.833  fsync_on_close=1
00:11:57.833  verify_state_save=0
00:11:57.833  [nvme-host]
00:11:57.833   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1127 -- # true
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:11:57.833    11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:11:57.833   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:11:57.833   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1131 -- # true
00:11:57.833   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1147 -- # true
00:11:57.833   11:02:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:11:59.207  [2024-12-09 11:02:15.837191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:03.394  [2024-12-09 11:02:20.401903] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:03.962  [2024-12-09 11:02:20.678592] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:08.151  [2024-12-09 11:02:25.096565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:08.408  [2024-12-09 11:02:25.357327] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:08.408   11:02:25 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1162 -- # sleep 1
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:12:09.786  hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:12:09.786  <VM-1-6-7> nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:12:09.786  <VM-1-6-7> Starting 1 thread
00:12:09.786  <VM-1-6-7> 
00:12:09.786  nvme-host: (groupid=0, jobs=1): err= 0: pid=956: Mon Dec  9 11:02:25 2024
00:12:09.786    read: IOPS=1321, BW=222MiB/s (232MB/s)(2048MiB/9239msec)
00:12:09.786      slat (usec): min=52, max=17740, avg=2479.23, stdev=3565.21
00:12:09.786      clat (msec): min=7, max=394, avg=132.95, stdev=74.06
00:12:09.786       lat (msec): min=7, max=396, avg=135.43, stdev=73.68
00:12:09.786      clat percentiles (msec):
00:12:09.786       |  1.00th=[   11],  5.00th=[   20], 10.00th=[   42], 20.00th=[   72],
00:12:09.786       | 30.00th=[   87], 40.00th=[  107], 50.00th=[  125], 60.00th=[  144],
00:12:09.786       | 70.00th=[  167], 80.00th=[  194], 90.00th=[  232], 95.00th=[  268],
00:12:09.786       | 99.00th=[  334], 99.50th=[  359], 99.90th=[  384], 99.95th=[  388],
00:12:09.786       | 99.99th=[  397]
00:12:09.786    write: IOPS=1404, BW=236MiB/s (247MB/s)(2048MiB/8691msec); 0 zone resets
00:12:09.786      slat (usec): min=302, max=79383, avg=21588.77, stdev=14858.16
00:12:09.786      clat (msec): min=6, max=376, avg=120.99, stdev=69.09
00:12:09.786       lat (msec): min=7, max=411, avg=142.58, stdev=72.16
00:12:09.786      clat percentiles (msec):
00:12:09.786       |  1.00th=[    9],  5.00th=[   18], 10.00th=[   30], 20.00th=[   64],
00:12:09.786       | 30.00th=[   78], 40.00th=[   99], 50.00th=[  112], 60.00th=[  130],
00:12:09.786       | 70.00th=[  153], 80.00th=[  176], 90.00th=[  215], 95.00th=[  243],
00:12:09.786       | 99.00th=[  313], 99.50th=[  376], 99.90th=[  376], 99.95th=[  376],
00:12:09.786       | 99.99th=[  376]
00:12:09.786     bw (  KiB/s): min=24760, max=382488, per=96.57%, avg=233016.89, stdev=89073.00, samples=18
00:12:09.786     iops        : min=  124, max= 2048, avg=1356.44, stdev=600.56, samples=18
00:12:09.786    lat (msec)   : 10=1.31%, 20=3.98%, 50=8.31%, 100=25.48%, 250=55.02%
00:12:09.786    lat (msec)   : 500=5.91%
00:12:09.786    cpu          : usr=93.56%, sys=2.44%, ctx=435, majf=0, minf=34
00:12:09.786    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:12:09.786       submit    : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:12:09.786       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:09.786       issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:09.786       latency   : target=0, window=0, percentile=100.00%, depth=512
00:12:09.786  
00:12:09.786  Run status group 0 (all jobs):
00:12:09.786     READ: bw=222MiB/s (232MB/s), 222MiB/s-222MiB/s (232MB/s-232MB/s), io=2048MiB (2147MB), run=9239-9239msec
00:12:09.786    WRITE: bw=236MiB/s (247MB/s), 236MiB/s-236MiB/s (247MB/s-247MB/s), io=2048MiB (2147MB), run=8691-8691msec
00:12:09.786  
00:12:09.786  Disk stats (read/write):
00:12:09.786    sdb: ios=12349/12184, merge=85/87, ticks=137124/100872, in_queue=237997, util=28.65%
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:09.786  INFO: Shutting down virtual machine...
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:09.786   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:09.786    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:09.786    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=173919
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 173919
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:09.787  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:09.787  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:09.787  INFO: VM1 is shutting down - wait a while to complete
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:09.787  INFO: Waiting for VMs to shutdown...
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:09.787   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:09.787    11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:09.788   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=173919
00:12:09.788   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 173919
00:12:09.788   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:09.788   11:02:26 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:10.724    11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=173919
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 173919
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:10.724   11:02:27 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:12.101   11:02:28 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:13.038  INFO: All VMs successfully shut down
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
00:12:13.038   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@518 -- # xtrace_disable
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:13.039  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:12:13.039  INFO: Creating new VM in /root/vhost_test/vms/1
00:12:13.039  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:12:13.039  INFO: TASK MASK: 6-7
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # local node_num=0
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # local boot_disk_present=false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:12:13.039  INFO: NUMA NODE: 0
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # [[ -n '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # [[ -z '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # IFS=,
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@701 -- # read -r disk disk_type _
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # [[ -z '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@704 -- # case $disk_type in
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:12:13.039  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # [[ -n '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@785 -- # (( 0 ))
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:12:13.039  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # cat
00:12:13.039    11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10100
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 10101
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@829 -- # echo 10102
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@832 -- # [[ -z '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@834 -- # echo 10104
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # echo 101
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # [[ -z '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@838 -- # [[ -z '' ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@843 -- # local run_all=false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@844 -- # local vms_to_run=
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@846 -- # getopts a-: optchar
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@856 -- # false
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # shift 0
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@860 -- # for vm in "$@"
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@871 -- # vm_is_running 1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:13.039   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:12:13.040  INFO: running /root/vhost_test/vms/1/run.sh
00:12:13.040   11:02:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:12:13.040  Running VM in /root/vhost_test/vms/1
00:12:13.299  [2024-12-09 11:02:30.135146] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:12:13.299  Waiting for QEMU pid file
00:12:14.234  === qemu.log ===
00:12:14.234  === qemu.log ===
00:12:14.234   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1
00:12:14.234   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@913 -- # assert_number 60
00:12:14.234   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:12:14.235   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@281 -- # return 0
00:12:14.235   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@915 -- # xtrace_disable
00:12:14.235   11:02:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:14.235  INFO: Waiting for VMs to boot
00:12:14.235  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:12:29.120  [2024-12-09 11:02:45.328538] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:51.049  
00:12:51.049  INFO: VM1 ready
00:12:51.049  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:51.049  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:52.426  INFO: all VMs ready
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # return 0
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart=
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]]
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1
00:12:52.426   11:03:08 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1014 -- # local 'script=shopt -s nullglob;
00:12:52.426  	for entry in /sys/block/sd*; do
00:12:52.426  		disk_type="$(cat $entry/device/vendor)";
00:12:52.426  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:52.426  			fname=$(basename $entry);
00:12:52.426  			echo -n " $fname";
00:12:52.426  		fi;
00:12:52.426  	done'
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # echo 'shopt -s nullglob;
00:12:52.426  	for entry in /sys/block/sd*; do
00:12:52.426  		disk_type="$(cat $entry/device/vendor)";
00:12:52.426  		if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
00:12:52.426  			fname=$(basename $entry);
00:12:52.426  			echo -n " $fname";
00:12:52.426  		fi;
00:12:52.426  	done'
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # vm_exec 1 bash -s
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:52.426     11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s
00:12:52.426  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1016 -- # SCSI_DISK=' sdb'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1018 -- # [[ -z  sdb ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[  sdb != \ \s\d\b ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:12:52.426  INFO: Shutting down virtual machine...
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # vm_list_all
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # vms=()
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@466 -- # local vms
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@492 -- # vm_shutdown 1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # vm_is_running 1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=183751
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 183751
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:12:52.426  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@432 -- # set +e
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@338 -- # local vm_num=1
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@339 -- # shift
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:12:52.426    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:12:52.426   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:12:52.426  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:12:52.685  INFO: VM1 is shutting down - wait a while to complete
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@435 -- # set -e
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:12:52.685  INFO: Waiting for VMs to shutdown...
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:52.685   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:52.685    11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:52.686   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=183751
00:12:52.686   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 183751
00:12:52.686   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:52.686   11:03:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@376 -- # local vm_pid
00:12:53.620    11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@377 -- # vm_pid=183751
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@379 -- # /bin/kill -0 183751
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@380 -- # return 0
00:12:53.620   11:03:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # vm_is_running 1
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@309 -- # return 0
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 1
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:12:54.554   11:03:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@500 -- # sleep 1
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:12:55.930  INFO: All VMs successfully shut down
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@505 -- # return 0
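The shutdown sequence traced above (send `shutdown -h -P now` over SSH, then poll the QEMU PID once per second until it disappears or a timeout elapses) can be sketched as a standalone helper. This is a minimal illustration of the pattern, not vhost/common.sh's actual helpers (`vm_shutdown_all`, `vm_is_running`); the function name `wait_for_exit` and the timeout value are invented for the example.

```shell
#!/usr/bin/env bash
# Poll a PID with `kill -0` once per second until the process exits
# or the timeout (in seconds) runs out -- the same loop shape as
# vhost/common.sh lines @496-@500 in the trace above.
wait_for_exit() {
    local pid=$1 timeout=${2:-90}
    while (( timeout-- > 0 )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "process $pid has exited"
            return 0
        fi
        sleep 1
    done
    echo "timed out waiting for $pid" >&2
    return 1
}

# Example: wait for a short-lived background job to finish.
sleep 2 &
wait_for_exit $! 10
```

Polling with `kill -0` only checks that the PID still exists; the harness additionally reads the PID from `qemu.pid` each iteration so a VM whose pidfile has been removed is treated as shut down.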
00:12:55.930   11:03:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0
00:12:55.930  [2024-12-09 11:03:12.757897] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE)
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir
00:12:57.305    11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0
00:12:57.305    11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0
00:12:57.305    11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:12:57.305    11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid
00:12:57.305    11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=173038
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 173038) app'
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 173038) app'
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 173038) app'
00:12:57.305  INFO: killing vhost (PID 173038) app
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 173038
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out=
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:12:57.305  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 ))
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 173038
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:12:57.305  .
00:12:57.305   11:03:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:12:58.240   11:03:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:12:58.240   11:03:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:58.240   11:03:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 173038
00:12:58.240   11:03:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo .
00:12:58.240  .
00:12:58.240   11:03:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ ))
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 ))
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 173038
00:12:59.614  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (173038) - No such process
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 173038
00:12:59.614  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (173038) - No such process
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 173038
00:12:59.614  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (173038) - No such process
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@245 -- # is_pid_child 173038
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1686 -- # local pid=173038 _pid
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:12:59.614    11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1685 -- # jobs -pr
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1688 -- # read -r _pid
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1692 -- # return 1
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@261 -- # return 0
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:12:59.614  
00:12:59.614  real	1m46.844s
00:12:59.614  user	7m1.432s
00:12:59.614  sys	0m1.966s
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:12:59.614  ************************************
00:12:59.614  END TEST vfio_user_virtio_scsi_restart_vm
00:12:59.614  ************************************
00:12:59.614   11:03:16 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:12:59.614   11:03:16 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:59.614   11:03:16 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:59.614   11:03:16 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:12:59.614  ************************************
00:12:59.614  START TEST vfio_user_virtio_bdevperf
00:12:59.614  ************************************
00:12:59.614   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh
00:12:59.614  * Looking for test storage...
00:12:59.614  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:12:59.614    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:59.614     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:12:59.614     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:59.614    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:59.615     11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:59.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:59.615  		--rc genhtml_branch_coverage=1
00:12:59.615  		--rc genhtml_function_coverage=1
00:12:59.615  		--rc genhtml_legend=1
00:12:59.615  		--rc geninfo_all_blocks=1
00:12:59.615  		--rc geninfo_unexecuted_blocks=1
00:12:59.615  		
00:12:59.615  		'
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:59.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:59.615  		--rc genhtml_branch_coverage=1
00:12:59.615  		--rc genhtml_function_coverage=1
00:12:59.615  		--rc genhtml_legend=1
00:12:59.615  		--rc geninfo_all_blocks=1
00:12:59.615  		--rc geninfo_unexecuted_blocks=1
00:12:59.615  		
00:12:59.615  		'
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:59.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:59.615  		--rc genhtml_branch_coverage=1
00:12:59.615  		--rc genhtml_function_coverage=1
00:12:59.615  		--rc genhtml_legend=1
00:12:59.615  		--rc geninfo_all_blocks=1
00:12:59.615  		--rc geninfo_unexecuted_blocks=1
00:12:59.615  		
00:12:59.615  		'
00:12:59.615    11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:59.615  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:59.615  		--rc genhtml_branch_coverage=1
00:12:59.615  		--rc genhtml_function_coverage=1
00:12:59.615  		--rc genhtml_legend=1
00:12:59.615  		--rc geninfo_all_blocks=1
00:12:59.615  		--rc geninfo_unexecuted_blocks=1
00:12:59.615  		
00:12:59.615  		'
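The long trace above is scripts/common.sh's `lt 1.15 2` check: both version strings are split on `.-:` into arrays and compared numerically field by field. A condensed sketch of that comparison, under the assumption of plain dotted numeric versions (the helper name `ver_lt` is illustrative, not the script's real function):

```shell
#!/usr/bin/env bash
# Return 0 (true) if dotted version $1 is strictly less than $2.
# Mirrors the field-by-field numeric compare done by scripts/common.sh
# cmp_versions in the trace above; assumes purely numeric components.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the test picks the extra `--rc lcov_branch_coverage=1` style options: lcov 2.x changed option handling, so the harness branches on whether the installed lcov predates 2.0.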
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- # vfu_dir=/tmp/vfu_devices
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=192539
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 192539
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 192539 ']'
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:59.615  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:59.615   11:03:16 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:12:59.615  [2024-12-09 11:03:16.493711] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:12:59.615  [2024-12-09 11:03:16.493859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192539 ]
00:12:59.615  EAL: No free 2048 kB hugepages reported on node 1
00:12:59.615  [2024-12-09 11:03:16.606463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:59.873  [2024-12-09 11:03:16.709676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:59.873  [2024-12-09 11:03:16.709747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:59.873  [2024-12-09 11:03:16.709847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:59.873  [2024-12-09 11:03:16.709831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.808   11:03:17 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:00.808   11:03:17 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:13:00.808   11:03:17 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512
00:13:00.808  malloc0
00:13:00.808   11:03:17 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512
00:13:01.066  malloc1
00:13:01.066   11:03:18 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512
00:13:01.324  malloc2
00:13:01.324   11:03:18 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices
00:13:01.582   11:03:18 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
00:13:01.840  [2024-12-09 11:03:18.716909] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 470
00:13:01.840  [2024-12-09 11:03:18.716951] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 470
00:13:01.840  [2024-12-09 11:03:18.717080] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0
00:13:01.840  [2024-12-09 11:03:18.717124] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1
00:13:01.840  [2024-12-09 11:03:18.717153] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2
00:13:01.840  [2024-12-09 11:03:18.717165] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3
00:13:01.840   11:03:18 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring
00:13:02.097  [2024-12-09 11:03:18.917693] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 574
00:13:02.097  [2024-12-09 11:03:18.917725] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 574
00:13:02.097  [2024-12-09 11:03:18.917802] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0
00:13:02.097  [2024-12-09 11:03:18.917820] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1
00:13:02.097  [2024-12-09 11:03:18.917829] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2
00:13:02.097  [2024-12-09 11:03:18.917842] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3
00:13:02.097   11:03:18 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
00:13:02.356  [2024-12-09 11:03:19.138601] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1'
00:13:02.356   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
00:13:02.356  [2024-12-09 11:03:19.347460] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2'
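The RPC calls issued between the malloc bdev creation and this point can be replayed as one condensed setup fragment. Commands and flags are taken verbatim from the trace; the `$SPDK` path variable is illustrative, and a running `spdk_tgt` listening on the default RPC socket is assumed, so this is a configuration sketch rather than a runnable test.

```shell
# Condensed replay of initiator_bdevperf.sh's target-side setup, as
# exercised in the log above. Requires spdk_tgt already running.
rpc=$SPDK/scripts/rpc.py   # $SPDK = path to the SPDK checkout (assumption)

# Three 64 MiB malloc bdevs with 512-byte blocks.
$rpc bdev_malloc_create -b malloc0 64 512
$rpc bdev_malloc_create -b malloc1 64 512
$rpc bdev_malloc_create -b malloc2 64 512

# vfio-user endpoints rooted at /tmp/vfu_devices: one virtio-blk,
# one virtio-scsi with two targets backed by malloc1/malloc2.
$rpc vfu_tgt_set_base_path /tmp/vfu_devices
$rpc vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 \
    --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
$rpc vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 \
    --num-io-queues=2 --qsize=256 --packed-ring
$rpc vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
$rpc vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2
```

With the endpoints in place, the test then launches bdevperf (seen next in the log) as a vfio-user initiator against `/tmp/vfu_devices`.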
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=192970
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 192970 /tmp/bdevperf.sock
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 192970 ']'
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/bdevperf.sock
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...'
00:13:02.614  Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:02.614   11:03:19 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:02.614  [2024-12-09 11:03:19.446243] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:13:02.615  [2024-12-09 11:03:19.446347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid192970 ]
00:13:02.615  EAL: No free 2048 kB hugepages reported on node 1
00:13:03.547  [2024-12-09 11:03:20.248622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:03.547  [2024-12-09 11:03:20.374603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:13:03.547  [2024-12-09 11:03:20.374679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:13:03.547  [2024-12-09 11:03:20.374727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:13:03.547  [2024-12-09 11:03:20.374744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:13:04.114   11:03:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:04.114   11:03:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:13:04.114   11:03:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
00:13:04.374  [2024-12-09 11:03:21.267364] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully
00:13:04.374  [2024-12-09 11:03:21.269549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.270499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.271559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.272547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.273535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:04.374  [2024-12-09 11:03:21.273581] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7fde063a4000
00:13:04.374  [2024-12-09 11:03:21.274536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.275582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.276582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.277547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.278560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.374  [2024-12-09 11:03:21.280490] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:04.374  [2024-12-09 11:03:21.293944] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully
00:13:04.374  [2024-12-09 11:03:21.295736] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4
00:13:04.374  [2024-12-09 11:03:21.296671] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4
00:13:04.374  [2024-12-09 11:03:21.296738] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:04.374  [2024-12-09 11:03:21.297676] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.297718] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:13:04.374  [2024-12-09 11:03:21.297730] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:13:04.374  [2024-12-09 11:03:21.297741] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:04.374  [2024-12-09 11:03:21.298684] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.298719] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:04.374  [2024-12-09 11:03:21.298749] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:04.374  [2024-12-09 11:03:21.299698] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.299733] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:04.374  [2024-12-09 11:03:21.299768] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:04.374  [2024-12-09 11:03:21.299797] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:13:04.374  [2024-12-09 11:03:21.300708] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.300742] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1
00:13:04.374  [2024-12-09 11:03:21.300751] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:13:04.374  [2024-12-09 11:03:21.301720] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.301751] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:13:04.374  [2024-12-09 11:03:21.301836] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:04.374  [2024-12-09 11:03:21.302722] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.302753] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1
00:13:04.374  [2024-12-09 11:03:21.302823] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:04.374  [2024-12-09 11:03:21.302880] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:13:04.374  [2024-12-09 11:03:21.303725] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.303756] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3
00:13:04.374  [2024-12-09 11:03:21.303773] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:13:04.374  [2024-12-09 11:03:21.304729] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.374  [2024-12-09 11:03:21.304762] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:13:04.374  [2024-12-09 11:03:21.304832] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:04.374  [2024-12-09 11:03:21.305743] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:13:04.374  [2024-12-09 11:03:21.305783] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0
00:13:04.374  [2024-12-09 11:03:21.306746] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:13:04.374  [2024-12-09 11:03:21.306786] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007
00:13:04.374  [2024-12-09 11:03:21.307751] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4
00:13:04.375  [2024-12-09 11:03:21.307806] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1
00:13:04.375  [2024-12-09 11:03:21.308761] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4
00:13:04.375  [2024-12-09 11:03:21.308807] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5
00:13:04.375  [2024-12-09 11:03:21.308845] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007
00:13:04.375  [2024-12-09 11:03:21.309765] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:13:04.375  [2024-12-09 11:03:21.309819] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0
00:13:04.375  [2024-12-09 11:03:21.310777] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:13:04.375  [2024-12-09 11:03:21.310811] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3
00:13:04.375  [2024-12-09 11:03:21.310821] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3
00:13:04.375  [2024-12-09 11:03:21.311803] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4
00:13:04.375  [2024-12-09 11:03:21.311817] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1
00:13:04.375  [2024-12-09 11:03:21.312815] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4
00:13:04.375  [2024-12-09 11:03:21.312828] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1
00:13:04.375  [2024-12-09 11:03:21.312842] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003
00:13:04.375  [2024-12-09 11:03:21.312871] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003
00:13:04.375  [2024-12-09 11:03:21.313820] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.375  [2024-12-09 11:03:21.313837] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3
00:13:04.375  [2024-12-09 11:03:21.313896] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:04.375  [2024-12-09 11:03:21.313943] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:13:04.375  [2024-12-09 11:03:21.314832] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:04.375  [2024-12-09 11:03:21.314848] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb
00:13:04.375  [2024-12-09 11:03:21.314857] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:13:04.375  [2024-12-09 11:03:21.315844] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.375  [2024-12-09 11:03:21.315857] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:13:04.375  [2024-12-09 11:03:21.315894] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:04.375  [2024-12-09 11:03:21.316850] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.316863] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:04.375  [2024-12-09 11:03:21.317858] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:04.375  [2024-12-09 11:03:21.317873] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:13:04.375  [2024-12-09 11:03:21.317899] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:13:04.375  [2024-12-09 11:03:21.318861] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.318874] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:04.375  [2024-12-09 11:03:21.319870] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:04.375  [2024-12-09 11:03:21.319884] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69aec000
00:13:04.375  [2024-12-09 11:03:21.320878] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:04.375  [2024-12-09 11:03:21.320892] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.321892] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:04.375  [2024-12-09 11:03:21.321906] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69aed000
00:13:04.375  [2024-12-09 11:03:21.322902] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:04.375  [2024-12-09 11:03:21.322916] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.323915] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:04.375  [2024-12-09 11:03:21.323929] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69aee000
00:13:04.375  [2024-12-09 11:03:21.324920] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:04.375  [2024-12-09 11:03:21.324951] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.325929] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:04.375  [2024-12-09 11:03:21.325943] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0
00:13:04.375  [2024-12-09 11:03:21.326935] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:04.375  [2024-12-09 11:03:21.326948] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.375  [2024-12-09 11:03:21.326959] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0
00:13:04.375  [2024-12-09 11:03:21.326967] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0
00:13:04.375  [2024-12-09 11:03:21.326990] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully
00:13:04.375  [2024-12-09 11:03:21.327032] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:13:04.375  [2024-12-09 11:03:21.327063] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069aec000
00:13:04.375  [2024-12-09 11:03:21.327080] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069aed000
00:13:04.375  [2024-12-09 11:03:21.327092] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aee000
00:13:04.375  [2024-12-09 11:03:21.327941] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.327958] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:04.375  [2024-12-09 11:03:21.328950] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:04.375  [2024-12-09 11:03:21.328966] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:13:04.375  [2024-12-09 11:03:21.329008] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:13:04.375  [2024-12-09 11:03:21.329952] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.329973] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:04.375  [2024-12-09 11:03:21.330956] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:04.375  [2024-12-09 11:03:21.330975] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ae8000
00:13:04.375  [2024-12-09 11:03:21.331967] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:04.375  [2024-12-09 11:03:21.331983] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.332990] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:04.375  [2024-12-09 11:03:21.333006] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ae9000
00:13:04.375  [2024-12-09 11:03:21.333995] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:04.375  [2024-12-09 11:03:21.334012] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.335000] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:04.375  [2024-12-09 11:03:21.335017] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69aea000
00:13:04.375  [2024-12-09 11:03:21.336008] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:04.375  [2024-12-09 11:03:21.336024] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.375  [2024-12-09 11:03:21.337018] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:04.375  [2024-12-09 11:03:21.337034] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1
00:13:04.375  [2024-12-09 11:03:21.338026] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:04.375  [2024-12-09 11:03:21.338047] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.375  [2024-12-09 11:03:21.338055] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1
00:13:04.375  [2024-12-09 11:03:21.338065] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1
00:13:04.375  [2024-12-09 11:03:21.338075] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully
00:13:04.375  [2024-12-09 11:03:21.338118] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:13:04.375  [2024-12-09 11:03:21.338184] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae8000
00:13:04.375  [2024-12-09 11:03:21.338205] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae9000
00:13:04.375  [2024-12-09 11:03:21.338225] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069aea000
00:13:04.375  [2024-12-09 11:03:21.339037] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.339051] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:04.375  [2024-12-09 11:03:21.340041] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:04.375  [2024-12-09 11:03:21.340055] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100
00:13:04.375  [2024-12-09 11:03:21.340107] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256
00:13:04.375  [2024-12-09 11:03:21.341057] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.375  [2024-12-09 11:03:21.341070] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:04.376  [2024-12-09 11:03:21.342064] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:04.376  [2024-12-09 11:03:21.342078] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x69ae4000
00:13:04.376  [2024-12-09 11:03:21.343068] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:04.376  [2024-12-09 11:03:21.343082] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.344082] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:04.376  [2024-12-09 11:03:21.344112] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x69ae5000
00:13:04.376  [2024-12-09 11:03:21.345129] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:04.376  [2024-12-09 11:03:21.345143] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.346115] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:04.376  [2024-12-09 11:03:21.346128] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x69ae6000
00:13:04.376  [2024-12-09 11:03:21.347126] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:04.376  [2024-12-09 11:03:21.347156] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.348135] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:04.376  [2024-12-09 11:03:21.348169] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2
00:13:04.376  [2024-12-09 11:03:21.349141] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:04.376  [2024-12-09 11:03:21.349170] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.376  [2024-12-09 11:03:21.349180] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2
00:13:04.376  [2024-12-09 11:03:21.349188] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2
00:13:04.376  [2024-12-09 11:03:21.349201] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully
00:13:04.376  [2024-12-09 11:03:21.349260] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses:
00:13:04.376  [2024-12-09 11:03:21.349305] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae4000
00:13:04.376  [2024-12-09 11:03:21.349332] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae5000
00:13:04.376  [2024-12-09 11:03:21.349349] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae6000
00:13:04.376  [2024-12-09 11:03:21.350165] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.376  [2024-12-09 11:03:21.350200] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:04.376  [2024-12-09 11:03:21.351155] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2
00:13:04.376  [2024-12-09 11:03:21.351192] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100
00:13:04.376  [2024-12-09 11:03:21.351242] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256
00:13:04.376  [2024-12-09 11:03:21.352161] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:04.376  [2024-12-09 11:03:21.352194] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:04.376  [2024-12-09 11:03:21.353175] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4
00:13:04.376  [2024-12-09 11:03:21.353208] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x69ae0000
00:13:04.376  [2024-12-09 11:03:21.354177] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4
00:13:04.376  [2024-12-09 11:03:21.354210] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.355181] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4
00:13:04.376  [2024-12-09 11:03:21.355214] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x69ae1000
00:13:04.376  [2024-12-09 11:03:21.356185] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4
00:13:04.376  [2024-12-09 11:03:21.356218] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.357199] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4
00:13:04.376  [2024-12-09 11:03:21.357232] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x69ae2000
00:13:04.376  [2024-12-09 11:03:21.358207] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4
00:13:04.376  [2024-12-09 11:03:21.358240] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.376  [2024-12-09 11:03:21.359214] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2
00:13:04.376  [2024-12-09 11:03:21.359256] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3
00:13:04.376  [2024-12-09 11:03:21.360220] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:04.376  [2024-12-09 11:03:21.360236] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.376  [2024-12-09 11:03:21.360245] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3
00:13:04.376  [2024-12-09 11:03:21.360254] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3
00:13:04.376  [2024-12-09 11:03:21.360263] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully
00:13:04.376  [2024-12-09 11:03:21.360294] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses:
00:13:04.376  [2024-12-09 11:03:21.360348] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ae0000
00:13:04.376  [2024-12-09 11:03:21.360384] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ae1000
00:13:04.376  [2024-12-09 11:03:21.360404] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ae2000
00:13:04.376  [2024-12-09 11:03:21.361226] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.376  [2024-12-09 11:03:21.361255] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb
00:13:04.376  [2024-12-09 11:03:21.361317] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:04.376  [2024-12-09 11:03:21.361368] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:13:04.376  [2024-12-09 11:03:21.362237] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:04.376  [2024-12-09 11:03:21.362266] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf
00:13:04.376  [2024-12-09 11:03:21.362276] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:13:04.376  [2024-12-09 11:03:21.362283] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi
00:13:04.376  [2024-12-09 11:03:21.364524] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0
00:13:04.376  [2024-12-09 11:03:21.365607] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:04.376  [2024-12-09 11:03:21.365641] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf
00:13:04.376  [2024-12-09 11:03:21.365681] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:13:04.635  VirtioScsi0t0 VirtioScsi0t1
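Annotation (not part of the test output): the PCI_COMMON_STATUS values traced above (0 → 1 → 3 → b → f) are the standard virtio 1.x driver handshake, with each write adding one status bit. A minimal sketch decoding those bytes:

```python
# Virtio device status bits per the virtio 1.x specification.
STATUS_BITS = {
    0x01: "ACKNOWLEDGE",
    0x02: "DRIVER",
    0x04: "DRIVER_OK",
    0x08: "FEATURES_OK",
    0x40: "DEVICE_NEEDS_RESET",
    0x80: "FAILED",
}

def decode_status(status: int) -> list[str]:
    """Return the names of the bits set in a virtio status byte."""
    return [name for bit, name in sorted(STATUS_BITS.items()) if status & bit]

# The progression seen in the log: reset, then one bit added per step.
for s in (0x0, 0x1, 0x3, 0xb, 0xf):
    print(f"{s:#x}: {decode_status(s)}")
```

So `status b` in the log means ACKNOWLEDGE|DRIVER|FEATURES_OK, and the final write of `f` sets DRIVER_OK, which is what triggers `vfu_virtio_dev_start`.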
00:13:04.635   11:03:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0
00:13:04.635  [2024-12-09 11:03:21.636430] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully
00:13:04.635  [2024-12-09 11:03:21.638558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.635  [2024-12-09 11:03:21.639556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.635  [2024-12-09 11:03:21.640566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.635  [2024-12-09 11:03:21.641589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.635  [2024-12-09 11:03:21.642587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32
00:13:04.635  [2024-12-09 11:03:21.642626] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7fde06320000
00:13:04.635  [2024-12-09 11:03:21.643652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.635  [2024-12-09 11:03:21.644613] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.895  [2024-12-09 11:03:21.645667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.895  [2024-12-09 11:03:21.646649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:13:04.895  [2024-12-09 11:03:21.647707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:13:04.895  [2024-12-09 11:03:21.649807] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:04.895  [2024-12-09 11:03:21.662483] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully
00:13:04.895  [2024-12-09 11:03:21.663829] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:04.895  [2024-12-09 11:03:21.664814] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.664837] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:13:04.895  [2024-12-09 11:03:21.664851] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0
00:13:04.895  [2024-12-09 11:03:21.664862] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:04.895  [2024-12-09 11:03:21.665813] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.665827] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:04.895  [2024-12-09 11:03:21.665883] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:04.895  [2024-12-09 11:03:21.666818] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.666832] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:04.895  [2024-12-09 11:03:21.666863] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:04.895  [2024-12-09 11:03:21.666890] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 1
00:13:04.895  [2024-12-09 11:03:21.667830] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.667844] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1
00:13:04.895  [2024-12-09 11:03:21.667855] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1
00:13:04.895  [2024-12-09 11:03:21.668835] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.668856] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:13:04.895  [2024-12-09 11:03:21.668902] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:04.895  [2024-12-09 11:03:21.669844] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.895  [2024-12-09 11:03:21.669860] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1
00:13:04.895  [2024-12-09 11:03:21.669883] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 1
00:13:04.895  [2024-12-09 11:03:21.669899] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 3
00:13:04.895  [2024-12-09 11:03:21.670858] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:04.896  [2024-12-09 11:03:21.670874] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3
00:13:04.896  [2024-12-09 11:03:21.670886] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3
00:13:04.896  [2024-12-09 11:03:21.671866] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.896  [2024-12-09 11:03:21.671880] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:13:04.896  [2024-12-09 11:03:21.671917] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:04.896  [2024-12-09 11:03:21.672872] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:13:04.896  [2024-12-09 11:03:21.672887] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0
00:13:04.896  [2024-12-09 11:03:21.673876] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:13:04.896  [2024-12-09 11:03:21.673891] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646
00:13:04.896  [2024-12-09 11:03:21.674889] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4
00:13:04.896  [2024-12-09 11:03:21.674903] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1
00:13:04.896  [2024-12-09 11:03:21.675889] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4
00:13:04.896  [2024-12-09 11:03:21.675903] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5
00:13:04.896  [2024-12-09 11:03:21.675934] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646
00:13:04.896  [2024-12-09 11:03:21.676905] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:13:04.896  [2024-12-09 11:03:21.676920] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0
00:13:04.896  [2024-12-09 11:03:21.677919] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:13:04.896  [2024-12-09 11:03:21.677932] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446
00:13:04.896  [2024-12-09 11:03:21.677944] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446
00:13:04.896  [2024-12-09 11:03:21.678927] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4
00:13:04.896  [2024-12-09 11:03:21.678943] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1
00:13:04.896  [2024-12-09 11:03:21.679951] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4
00:13:04.896  [2024-12-09 11:03:21.679968] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1
00:13:04.896  [2024-12-09 11:03:21.679980] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446
00:13:04.896  [2024-12-09 11:03:21.680008] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446
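Annotation (not part of the test output): the negotiated feature word 0x100003446 is written in two 32-bit halves (GF_LO 0x3446, GF_HI 0x1). A sketch decoding it — the bit names below are a best-effort mapping from the virtio-blk feature-bit assignments, not taken from the log itself:

```python
# Assumed virtio-blk feature bit numbers (virtio 1.x); VERSION_1 is the
# transport-level bit 32 common to all virtio devices.
BLK_FEATURES = {
    1: "SIZE_MAX", 2: "SEG_MAX", 4: "GEOMETRY", 5: "RO", 6: "BLK_SIZE",
    9: "FLUSH", 10: "TOPOLOGY", 11: "CONFIG_WCE", 12: "MQ", 13: "DISCARD",
    14: "WRITE_ZEROES", 32: "VERSION_1",
}

def decode_features(features: int) -> list[str]:
    """List the set feature bits, naming the ones we recognise."""
    return [BLK_FEATURES.get(b, f"bit{b}")
            for b in range(features.bit_length()) if features >> b & 1]

print(decode_features(0x100003446))
```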
00:13:04.896  [2024-12-09 11:03:21.680955] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.896  [2024-12-09 11:03:21.680970] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3
00:13:04.896  [2024-12-09 11:03:21.680999] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 3
00:13:04.896  [2024-12-09 11:03:21.681026] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status b
00:13:04.896  [2024-12-09 11:03:21.681970] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:04.896  [2024-12-09 11:03:21.681983] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb
00:13:04.896  [2024-12-09 11:03:21.681996] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b
00:13:04.896  [2024-12-09 11:03:21.682978] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.896  [2024-12-09 11:03:21.682998] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:13:04.896  [2024-12-09 11:03:21.683027] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:04.896  [2024-12-09 11:03:21.683059] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:13:04.896  [2024-12-09 11:03:21.683990] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:13:04.896  [2024-12-09 11:03:21.684026] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4
00:13:04.896  [2024-12-09 11:03:21.685003] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4
00:13:04.896  [2024-12-09 11:03:21.685045] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8
00:13:04.896  [2024-12-09 11:03:21.686003] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8
00:13:04.896  [2024-12-09 11:03:21.686039] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2
00:13:04.896  [2024-12-09 11:03:21.687013] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2
00:13:04.896  [2024-12-09 11:03:21.687054] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4
00:13:04.896  [2024-12-09 11:03:21.688022] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4
00:13:04.896  [2024-12-09 11:03:21.688058] virtio_vfio_user.c:  32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4
00:13:04.896  [2024-12-09 11:03:21.689034] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4
00:13:04.896  [2024-12-09 11:03:21.690043] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:04.896  [2024-12-09 11:03:21.690060] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:04.896  [2024-12-09 11:03:21.691061] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:13:04.896  [2024-12-09 11:03:21.691078] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100
00:13:04.896  [2024-12-09 11:03:21.691119] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256
00:13:04.896  [2024-12-09 11:03:21.692076] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:04.896  [2024-12-09 11:03:21.692125] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:04.896  [2024-12-09 11:03:21.693079] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:13:04.896  [2024-12-09 11:03:21.693097] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x69adc000
00:13:04.896  [2024-12-09 11:03:21.694087] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:13:04.896  [2024-12-09 11:03:21.694138] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.896  [2024-12-09 11:03:21.695111] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:13:04.896  [2024-12-09 11:03:21.695146] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x69add000
00:13:04.896  [2024-12-09 11:03:21.696121] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:13:04.896  [2024-12-09 11:03:21.696170] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.896  [2024-12-09 11:03:21.697132] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:13:04.896  [2024-12-09 11:03:21.697168] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x69ade000
00:13:04.896  [2024-12-09 11:03:21.698148] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:13:04.896  [2024-12-09 11:03:21.698181] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.896  [2024-12-09 11:03:21.699149] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:13:04.896  [2024-12-09 11:03:21.699183] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0
00:13:04.896  [2024-12-09 11:03:21.700152] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:04.896  [2024-12-09 11:03:21.700186] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.896  [2024-12-09 11:03:21.700198] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0
00:13:04.896  [2024-12-09 11:03:21.700209] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0
00:13:04.896  [2024-12-09 11:03:21.700227] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully
00:13:04.896  [2024-12-09 11:03:21.700271] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses:
00:13:04.896  [2024-12-09 11:03:21.700310] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069adc000
00:13:04.896  [2024-12-09 11:03:21.700327] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069add000
00:13:04.896  [2024-12-09 11:03:21.700342] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ade000
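Annotation (not part of the test output): the three queue addresses are consistent with a split virtqueue whose rings are placed on separate, page-aligned pages by the initiator. A sketch of that layout for the 256-entry queue above (the page-alignment policy is an assumption about the allocator, not something the log states):

```python
# Split-virtqueue component sizes for a 256-entry queue (virtio 1.x layout).
QSIZE = 256
desc_bytes  = 16 * QSIZE       # 16-byte descriptors -> 4096 bytes, one page
avail_bytes = 6 + 2 * QSIZE    # flags + idx + ring of 2-byte entries -> 518
used_bytes  = 6 + 8 * QSIZE    # flags + idx + 8-byte used elements  -> 2054

def page_align(n: int, page: int = 4096) -> int:
    """Round n up to the next page boundary."""
    return (n + page - 1) // page * page

desc_addr  = 0x200069adc000                       # from the log above
avail_addr = desc_addr + page_align(desc_bytes)   # next page after descriptors
used_addr  = avail_addr + page_align(avail_bytes) # next page after avail ring

print(hex(avail_addr), hex(used_addr))
```

This reproduces the `aval_addr`/`used_addr` values logged for queue 0 (desc + 0x1000 and desc + 0x2000).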
00:13:04.896  [2024-12-09 11:03:21.701173] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:04.896  [2024-12-09 11:03:21.701204] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:04.896  [2024-12-09 11:03:21.702173] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2
00:13:04.896  [2024-12-09 11:03:21.702204] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100
00:13:04.896  [2024-12-09 11:03:21.702246] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256
00:13:04.896  [2024-12-09 11:03:21.703182] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:04.896  [2024-12-09 11:03:21.703212] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:04.896  [2024-12-09 11:03:21.704186] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4
00:13:04.896  [2024-12-09 11:03:21.704217] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x69ad8000
00:13:04.896  [2024-12-09 11:03:21.705194] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4
00:13:04.896  [2024-12-09 11:03:21.705225] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000
00:13:04.896  [2024-12-09 11:03:21.706207] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4
00:13:04.896  [2024-12-09 11:03:21.706238] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x69ad9000
00:13:04.896  [2024-12-09 11:03:21.707225] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4
00:13:04.896  [2024-12-09 11:03:21.707255] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000
00:13:04.896  [2024-12-09 11:03:21.708230] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4
00:13:04.897  [2024-12-09 11:03:21.708256] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x69ada000
00:13:04.897  [2024-12-09 11:03:21.709237] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4
00:13:04.897  [2024-12-09 11:03:21.709269] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000
00:13:04.897  [2024-12-09 11:03:21.710247] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2
00:13:04.897  [2024-12-09 11:03:21.710278] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1
00:13:04.897  [2024-12-09 11:03:21.711255] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:04.897  [2024-12-09 11:03:21.711285] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1
00:13:04.897  [2024-12-09 11:03:21.711295] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1
00:13:04.897  [2024-12-09 11:03:21.711303] vfu_virtio.c:  71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1
00:13:04.897  [2024-12-09 11:03:21.711314] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully
00:13:04.897  [2024-12-09 11:03:21.711393] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses:
00:13:04.897  [2024-12-09 11:03:21.711429] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: 	 desc_addr: 200069ad8000
00:13:04.897  [2024-12-09 11:03:21.711453] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: 	 aval_addr: 200069ad9000
00:13:04.897  [2024-12-09 11:03:21.711467] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: 	 used_addr: 200069ada000
00:13:04.897  [2024-12-09 11:03:21.712268] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.897  [2024-12-09 11:03:21.712303] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb
00:13:04.897  [2024-12-09 11:03:21.712355] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status b
00:13:04.897  [2024-12-09 11:03:21.712397] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status f
00:13:04.897  [2024-12-09 11:03:21.713277] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:04.897  [2024-12-09 11:03:21.713311] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf
00:13:04.897  [2024-12-09 11:03:21.713321] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f
00:13:04.897  [2024-12-09 11:03:21.713330] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk
00:13:04.897  [2024-12-09 11:03:21.715428] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0
00:13:04.897  [2024-12-09 11:03:21.716523] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:04.897  [2024-12-09 11:03:21.716555] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf
00:13:04.897  [2024-12-09 11:03:21.716608] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status f
00:13:04.897  VirtioBlk0
00:13:04.897   11:03:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
00:13:04.897  Running I/O for 30 seconds...
00:13:06.838      91427.00 IOPS,   357.14 MiB/s
[2024-12-09T10:03:25.224Z]     91413.00 IOPS,   357.08 MiB/s
[2024-12-09T10:03:26.164Z]     91408.33 IOPS,   357.06 MiB/s
[2024-12-09T10:03:27.098Z]     91431.75 IOPS,   357.16 MiB/s
[2024-12-09T10:03:28.033Z]     91433.20 IOPS,   357.16 MiB/s
[2024-12-09T10:03:28.992Z]     91443.50 IOPS,   357.20 MiB/s
[2024-12-09T10:03:29.927Z]     91450.00 IOPS,   357.23 MiB/s
[2024-12-09T10:03:31.303Z]     91457.62 IOPS,   357.26 MiB/s
[2024-12-09T10:03:31.869Z]     91441.56 IOPS,   357.19 MiB/s
[2024-12-09T10:03:33.245Z]     91434.80 IOPS,   357.17 MiB/s
[2024-12-09T10:03:34.179Z]     91434.00 IOPS,   357.16 MiB/s
[2024-12-09T10:03:35.115Z]     91440.75 IOPS,   357.19 MiB/s
[2024-12-09T10:03:36.046Z]     91438.15 IOPS,   357.18 MiB/s
[2024-12-09T10:03:36.981Z]     91437.64 IOPS,   357.18 MiB/s
[2024-12-09T10:03:37.923Z]     91440.53 IOPS,   357.19 MiB/s
[2024-12-09T10:03:39.301Z]     91441.88 IOPS,   357.19 MiB/s
[2024-12-09T10:03:40.237Z]     91440.12 IOPS,   357.19 MiB/s
[2024-12-09T10:03:41.174Z]     91442.06 IOPS,   357.20 MiB/s
[2024-12-09T10:03:42.109Z]     91411.63 IOPS,   357.08 MiB/s
[2024-12-09T10:03:43.043Z]     91406.00 IOPS,   357.05 MiB/s
[2024-12-09T10:03:43.979Z]     91409.76 IOPS,   357.07 MiB/s
[2024-12-09T10:03:44.916Z]     91409.18 IOPS,   357.07 MiB/s
[2024-12-09T10:03:46.292Z]     91402.17 IOPS,   357.04 MiB/s
[2024-12-09T10:03:47.227Z]     91395.83 IOPS,   357.01 MiB/s
[2024-12-09T10:03:48.162Z]     91392.28 IOPS,   357.00 MiB/s
[2024-12-09T10:03:49.098Z]     91393.00 IOPS,   357.00 MiB/s
[2024-12-09T10:03:50.033Z]     91398.59 IOPS,   357.03 MiB/s
[2024-12-09T10:03:50.969Z]     91401.21 IOPS,   357.04 MiB/s
[2024-12-09T10:03:52.346Z]     91377.69 IOPS,   356.94 MiB/s
[2024-12-09T10:03:52.346Z]     91361.97 IOPS,   356.88 MiB/s
00:13:35.335                                                                                                  Latency(us)
00:13:35.335  
[2024-12-09T10:03:52.346Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:35.335  Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:35.335  	 VirtioScsi0t0       :      30.01   21168.32      82.69       0.00     0.00   12085.85    2040.55   14239.19
00:13:35.335  Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:35.335  	 VirtioScsi0t1       :      30.01   21167.87      82.69       0.00     0.00   12086.26    1951.19   14239.19
00:13:35.335  Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:13:35.335  	 VirtioBlk0          :      30.01   49018.79     191.48       0.00     0.00    5216.99    1899.05    7089.80
00:13:35.335  
[2024-12-09T10:03:52.346Z]  ===================================================================================================================
00:13:35.335  
[2024-12-09T10:03:52.346Z]  Total                       :              91354.98     356.86       0.00     0.00    8400.52    1899.05   14239.19
00:13:35.335  {
00:13:35.335    "results": [
00:13:35.335      {
00:13:35.335        "job": "VirtioScsi0t0",
00:13:35.335        "core_mask": "0x10",
00:13:35.335        "workload": "randrw",
00:13:35.335        "percentage": 50,
00:13:35.335        "status": "finished",
00:13:35.335        "queue_depth": 256,
00:13:35.335        "io_size": 4096,
00:13:35.335        "runtime": 30.010312,
00:13:35.335        "iops": 21168.32374151925,
00:13:35.335        "mibps": 82.68876461530957,
00:13:35.335        "io_failed": 0,
00:13:35.335        "io_timeout": 0,
00:13:35.335        "avg_latency_us": 12085.848563177631,
00:13:35.335        "min_latency_us": 2040.5527272727272,
00:13:35.335        "max_latency_us": 14239.185454545455
00:13:35.335      },
00:13:35.335      {
00:13:35.335        "job": "VirtioScsi0t1",
00:13:35.335        "core_mask": "0x20",
00:13:35.335        "workload": "randrw",
00:13:35.335        "percentage": 50,
00:13:35.335        "status": "finished",
00:13:35.335        "queue_depth": 256,
00:13:35.335        "io_size": 4096,
00:13:35.335        "runtime": 30.010158,
00:13:35.335        "iops": 21167.865893941645,
00:13:35.335        "mibps": 82.68697614820955,
00:13:35.335        "io_failed": 0,
00:13:35.335        "io_timeout": 0,
00:13:35.335        "avg_latency_us": 12086.26004760037,
00:13:35.335        "min_latency_us": 1951.1854545454546,
00:13:35.335        "max_latency_us": 14239.185454545455
00:13:35.335      },
00:13:35.335      {
00:13:35.335        "job": "VirtioBlk0",
00:13:35.335        "core_mask": "0x40",
00:13:35.335        "workload": "randrw",
00:13:35.335        "percentage": 50,
00:13:35.335        "status": "finished",
00:13:35.335        "queue_depth": 256,
00:13:35.335        "io_size": 4096,
00:13:35.335        "runtime": 30.006063,
00:13:35.335        "iops": 49018.793301873695,
00:13:35.335        "mibps": 191.47966133544412,
00:13:35.335        "io_failed": 0,
00:13:35.335        "io_timeout": 0,
00:13:35.335        "avg_latency_us": 5216.991565274291,
00:13:35.335        "min_latency_us": 1899.0545454545454,
00:13:35.335        "max_latency_us": 7089.8036363636365
00:13:35.335      }
00:13:35.335    ],
00:13:35.335    "core_count": 3
00:13:35.335  }
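Annotation (not part of the test output): the `mibps` fields in the JSON above are derived from `iops` and the 4096-byte `io_size`. A quick consistency check of all three jobs:

```python
# bdevperf reports MiB/s as IOPS * io_size / 2**20; verify against the
# figures in the JSON results above.
jobs = {
    "VirtioScsi0t0": (21168.32374151925, 82.68876461530957),
    "VirtioScsi0t1": (21167.865893941645, 82.68697614820955),
    "VirtioBlk0":    (49018.793301873695, 191.47966133544412),
}
IO_SIZE = 4096  # bytes per I/O, from the "io_size" field

for name, (iops, mibps) in jobs.items():
    derived = iops * IO_SIZE / 2**20
    assert abs(derived - mibps) < 1e-6, name
    print(f"{name}: {derived:.2f} MiB/s (reported {mibps:.2f})")
```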
00:13:35.335   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 192970
00:13:35.335   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 192970 ']'
00:13:35.335   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 192970
00:13:35.335    11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:35.336    11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 192970
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 192970'
00:13:35.336  killing process with pid 192970
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 192970
00:13:35.336  Received shutdown signal, test time was about 30.000000 seconds
00:13:35.336  
00:13:35.336                                                                                                  Latency(us)
00:13:35.336  
[2024-12-09T10:03:52.347Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:35.336  
[2024-12-09T10:03:52.347Z]  ===================================================================================================================
00:13:35.336  
[2024-12-09T10:03:52.347Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:35.336  [2024-12-09 11:03:51.982667] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:35.336   11:03:51 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 192970
00:13:35.336  [2024-12-09 11:03:51.982854] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:13:35.336  [2024-12-09 11:03:51.982884] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:13:35.336  [2024-12-09 11:03:51.982899] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:13:35.336  [2024-12-09 11:03:51.982909] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:35.336  [2024-12-09 11:03:51.982928] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0
00:13:35.336  [2024-12-09 11:03:51.982940] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1
00:13:35.336  [2024-12-09 11:03:51.982951] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:35.336  [2024-12-09 11:03:51.983849] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1
00:13:35.336  [2024-12-09 11:03:51.983871] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0
00:13:35.336  [2024-12-09 11:03:51.983893] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:35.336  [2024-12-09 11:03:51.984858] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:51.984876] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:35.336  [2024-12-09 11:03:51.985870] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:51.985887] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:51.985898] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0
00:13:35.336  [2024-12-09 11:03:51.985911] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:13:35.336  [2024-12-09 11:03:51.986883] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:51.986901] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:35.336  [2024-12-09 11:03:51.987890] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:51.987909] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:51.987918] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1
00:13:35.336  [2024-12-09 11:03:51.987930] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:13:35.336  [2024-12-09 11:03:51.987969] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk
00:13:35.336  [2024-12-09 11:03:51.990738] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:35.336  [2024-12-09 11:03:52.024874] virtio_vfio_user.c:  77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:13:35.336  [2024-12-09 11:03:52.024973] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1
00:13:35.336  [2024-12-09 11:03:52.025009] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0
00:13:35.336  [2024-12-09 11:03:52.025027] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:13:35.336  [2024-12-09 11:03:52.025039] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:35.336  [2024-12-09 11:03:52.025055] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0
00:13:35.336  [2024-12-09 11:03:52.025069] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1
00:13:35.336  [2024-12-09 11:03:52.025078] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2
00:13:35.336  [2024-12-09 11:03:52.025088] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3
00:13:35.336  [2024-12-09 11:03:52.025096] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:35.336  [2024-12-09 11:03:52.025256] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:35.336  [2024-12-09 11:03:52.025281] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:13:35.336  [2024-12-09 11:03:52.025291] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting
00:13:35.336  [2024-12-09 11:03:52.025312] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk
00:13:35.336  [2024-12-09 11:03:52.025322] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:13:35.336  [2024-12-09 11:03:52.025347] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started
00:13:35.336  [2024-12-09 11:03:52.025982] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1
00:13:35.336  [2024-12-09 11:03:52.026002] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0
00:13:35.336  [2024-12-09 11:03:52.026021] virtio_vfio_user.c:  65:virtio_vfio_user_get_status: *DEBUG*: device status 0
00:13:35.336  [2024-12-09 11:03:52.026983] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:52.026998] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0
00:13:35.336  [2024-12-09 11:03:52.027993] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:52.028007] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:52.028018] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0
00:13:35.336  [2024-12-09 11:03:52.028026] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled
00:13:35.336  [2024-12-09 11:03:52.029000] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:52.029014] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1
00:13:35.336  [2024-12-09 11:03:52.030006] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:52.030020] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:52.030031] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1
00:13:35.336  [2024-12-09 11:03:52.030038] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled
00:13:35.336  [2024-12-09 11:03:52.031007] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:52.031021] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2
00:13:35.336  [2024-12-09 11:03:52.032009] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:52.032023] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:52.032033] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2
00:13:35.336  [2024-12-09 11:03:52.032040] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled
00:13:35.336  [2024-12-09 11:03:52.033017] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2
00:13:35.336  [2024-12-09 11:03:52.033035] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3
00:13:35.336  [2024-12-09 11:03:52.034031] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2
00:13:35.336  [2024-12-09 11:03:52.034061] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0
00:13:35.336  [2024-12-09 11:03:52.034091] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3
00:13:35.336  [2024-12-09 11:03:52.034114] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled
00:13:35.336  [2024-12-09 11:03:52.034181] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi
00:13:35.336  [2024-12-09 11:03:52.036861] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000
00:13:35.336  [2024-12-09 11:03:52.070361] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:35.336  [2024-12-09 11:03:52.070380] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:13:35.336  [2024-12-09 11:03:52.070391] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting
00:13:35.336  [2024-12-09 11:03:52.070410] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi
00:13:35.336  [2024-12-09 11:03:52.070422] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi
00:13:35.336  [2024-12-09 11:03:52.070429] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started
00:13:39.527   11:03:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT
00:13:39.527   11:03:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk
00:13:39.527  [2024-12-09 11:03:55.936064] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk
00:13:39.527   11:03:55 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi
00:13:39.527  [2024-12-09 11:03:56.160909] tgt_endpoint.c: 701:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 192539
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 192539 ']'
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@958 -- # kill -0 192539
00:13:39.527    11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # uname
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:39.527    11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 192539
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 192539'
00:13:39.527  killing process with pid 192539
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@973 -- # kill 192539
00:13:39.527   11:03:56 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@978 -- # wait 192539
00:13:42.059  
00:13:42.059  real	0m42.474s
00:13:42.059  user	5m0.048s
00:13:42.059  sys	0m2.308s
00:13:42.059   11:03:58 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:42.059   11:03:58 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:13:42.059  ************************************
00:13:42.059  END TEST vfio_user_virtio_bdevperf
00:13:42.059  ************************************
00:13:42.059   11:03:58 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]]
00:13:42.059   11:03:58 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:42.059   11:03:58 vfio_user_qemu -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:42.059   11:03:58 vfio_user_qemu -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:42.059   11:03:58 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:13:42.059  ************************************
00:13:42.059  START TEST vfio_user_virtio_fs_fio
00:13:42.059  ************************************
00:13:42.059   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:42.059  * Looking for test storage...
00:13:42.059  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lcov --version
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-:
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-:
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<'
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:42.059     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:42.059    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:42.059  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:42.059  		--rc genhtml_branch_coverage=1
00:13:42.059  		--rc genhtml_function_coverage=1
00:13:42.060  		--rc genhtml_legend=1
00:13:42.060  		--rc geninfo_all_blocks=1
00:13:42.060  		--rc geninfo_unexecuted_blocks=1
00:13:42.060  		
00:13:42.060  		'
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:42.060  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:42.060  		--rc genhtml_branch_coverage=1
00:13:42.060  		--rc genhtml_function_coverage=1
00:13:42.060  		--rc genhtml_legend=1
00:13:42.060  		--rc geninfo_all_blocks=1
00:13:42.060  		--rc geninfo_unexecuted_blocks=1
00:13:42.060  		
00:13:42.060  		'
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:42.060  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:42.060  		--rc genhtml_branch_coverage=1
00:13:42.060  		--rc genhtml_function_coverage=1
00:13:42.060  		--rc genhtml_legend=1
00:13:42.060  		--rc geninfo_all_blocks=1
00:13:42.060  		--rc geninfo_unexecuted_blocks=1
00:13:42.060  		
00:13:42.060  		'
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:42.060  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:42.060  		--rc genhtml_branch_coverage=1
00:13:42.060  		--rc genhtml_function_coverage=1
00:13:42.060  		--rc genhtml_legend=1
00:13:42.060  		--rc geninfo_all_blocks=1
00:13:42.060  		--rc geninfo_unexecuted_blocks=1
00:13:42.060  		
00:13:42.060  		'
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:13:42.060       11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:13:42.060     11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:13:42.060      11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:13:42.060       11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:13:42.060        11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup
00:13:42.060        11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:13:42.060        11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:13:42.060        11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2
00:13:42.060       11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]'
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:42.060    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']'
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]]
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0
00:13:42.060   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py
00:13:42.061    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0
00:13:42.061    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:13:42.061    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:13:42.061    11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=199992
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 199992
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 199992'
00:13:42.061  Process pid: 199992
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...'
00:13:42.061  waiting for app to run...
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 199992 /root/vhost_test/vhost/0/rpc.sock
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # '[' -z 199992 ']'
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@839 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...'
00:13:42.061  Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:42.061   11:03:58 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:42.061  [2024-12-09 11:03:59.052327] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:13:42.061  [2024-12-09 11:03:59.052439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199992 ]
00:13:42.319  EAL: No free 2048 kB hugepages reported on node 1
00:13:42.577  [2024-12-09 11:03:59.373508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:42.577  [2024-12-09 11:03:59.478636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:42.577  [2024-12-09 11:03:59.478708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:42.577  [2024-12-09 11:03:59.478748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.577  [2024-12-09 11:03:59.478768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@868 -- # return 0
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1
00:13:43.514    11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # tmpfile=/tmp/vfio-test.1/tmp.ukBDL1NVvO
00:13:43.514   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1
00:13:43.773  aio.1
00:13:43.773   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@518 -- # xtrace_disable
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:44.032  WARN: removing existing VM in '/root/vhost_test/vms/1'
00:13:44.032  INFO: Creating new VM in /root/vhost_test/vms/1
00:13:44.032  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:13:44.032  INFO: TASK MASK: 6-7
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # local node_num=0
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # local boot_disk_present=false
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:13:44.032  INFO: NUMA NODE: 0
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # [[ -n '' ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # [[ -z '' ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # (( 1 == 0 ))
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@693 -- # (( 1 == 0 ))
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # IFS=,
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@701 -- # read -r disk disk_type _
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # [[ -z '' ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@702 -- # disk_type=vfio_user_virtio
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@704 -- # case $disk_type in
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@766 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1'
00:13:44.032  INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@767 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk")
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@768 -- # [[ 1 == '' ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # [[ -n '' ]]
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@785 -- # (( 0 ))
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh'
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh'
00:13:44.032  INFO: Saving to /root/vhost_test/vms/1/run.sh
00:13:44.032   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # cat
00:13:44.033    11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/1/run.sh
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10100
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 10101
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@829 -- # echo 10102
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/1/migration_port
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@832 -- # [[ -z '' ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@834 -- # echo 10104
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # echo 101
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # [[ -z '' ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@838 -- # [[ -z '' ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@843 -- # local run_all=false
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@844 -- # local vms_to_run=
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@846 -- # getopts a-: optchar
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@856 -- # false
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # shift 0
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@860 -- # for vm in "$@"
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@861 -- # vm_num_is_valid 1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@866 -- # vms_to_run+=' 1'
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@871 -- # vm_is_running 1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/1/run.sh'
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh'
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh'
00:13:44.033  INFO: running /root/vhost_test/vms/1/run.sh
00:13:44.033   11:04:00 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@877 -- # /root/vhost_test/vms/1/run.sh
00:13:44.033  Running VM in /root/vhost_test/vms/1
00:13:44.599  [2024-12-09 11:04:01.316097] tgt_endpoint.c: 167:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully
00:13:44.599  Waiting for QEMU pid file
00:13:45.537  === qemu.log ===
00:13:45.537  === qemu.log ===
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@913 -- # assert_number 60
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # [[ 60 =~ [0-9]+ ]]
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@281 -- # return 0
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@915 -- # xtrace_disable
00:13:45.537   11:04:02 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:13:45.537  INFO: Waiting for VMs to boot
00:13:45.537  INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:14:07.470  
00:14:07.470  INFO: VM1 ready
00:14:07.470  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.471  INFO: all VMs ready
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # return 0
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1'
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1'
00:14:07.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.471    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.471   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1'
00:14:07.471  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.ukBDL1NVvO
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.ukBDL1NVvO'
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.730    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.730   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.ukBDL1NVvO'
00:14:07.730  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:07.989  /tmp/virtiofs.1/tmp.ukBDL1NVvO
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local OPTIND optchar
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@978 -- # local readonly=
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@979 -- # local fio_bin=
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@981 -- # case "$optchar" in
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@983 -- # case "$OPTARG" in
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@984 -- # local fio_bin=/usr/src/fio-static/fio
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@980 -- # getopts :-: optchar
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@993 -- # shift 1
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@994 -- # for vm_num in "$@"
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@995 -- # notice 'Starting fio server on VM1'
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1'
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1'
00:14:07.989  INFO: Starting fio server on VM1
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@996 -- # [[ /usr/src/fio-static/fio != '' ]]
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@997 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio'
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:07.989    11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:07.989   11:04:24 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio'
00:14:07.989  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@998 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.250    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.250   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid
00:14:08.250  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local arg
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local job_file=
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_bin=
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # vms=()
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local vms
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local out=
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1058 -- # local vm
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # local run_server_mode=true
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # local run_plugin_mode=false
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local fio_start_cmd
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_output_format=normal
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # local fio_gtod_reduce=false
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1064 -- # local wait_for_fio=true
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1069 -- # local fio_bin=/usr/src/fio-static/fio
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1068 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1072 -- # local out=/root/vhost_test/fio_results
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1073 -- # mkdir -p /root/vhost_test/fio_results
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # for arg in "$@"
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1067 -- # case "$arg" in
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1070 -- # vms+=("${arg#*=}")
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ -n /usr/src/fio-static/fio ]]
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1092 -- # [[ ! -r /usr/src/fio-static/fio ]]
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1097 -- # [[ -z /usr/src/fio-static/fio ]]
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]]
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1106 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never '
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local job_fname
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # job_fname=default_fsdev.job
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1110 -- # log_fname=default_fsdev.log
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal '
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # for vm in "${vms[@]}"
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1115 -- # local vm_num=1
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1116 -- # local vmdisks=/tmp/virtiofs.1/test
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1119 -- # vm_exec 1 'cat > /root/default_fsdev.job'
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.512    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.512   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job'
00:14:08.512  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # false
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1125 -- # vm_exec 1 cat /root/default_fsdev.job
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:08.771    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:08.771   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job
00:14:08.771  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:09.030  [global]
00:14:09.030  blocksize=4k
00:14:09.030  iodepth=512
00:14:09.030  ioengine=libaio
00:14:09.030  size=1G
00:14:09.030  group_reporting
00:14:09.030  thread
00:14:09.030  numjobs=1
00:14:09.030  direct=1
00:14:09.030  invalidate=1
00:14:09.030  rw=randrw
00:14:09.030  do_verify=1
00:14:09.030  filename=/tmp/virtiofs.1/test
00:14:09.030  [job0]
00:14:09.030   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1127 -- # true
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # vm_fio_socket 1
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@326 -- # vm_num_is_valid 1
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@327 -- # local vm_dir=/root/vhost_test/vms/1
00:14:09.030    11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # cat /root/vhost_test/vms/1/fio_socket
00:14:09.030   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1128 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job '
00:14:09.030   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1131 -- # true
00:14:09.030   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1147 -- # true
00:14:09.030   11:04:25 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1161 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job
00:14:30.962   11:04:44 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1162 -- # sleep 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1164 -- # [[ normal == \j\s\o\n ]]
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1172 -- # [[ ! -n '' ]]
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1173 -- # cat /root/vhost_test/fio_results/default_fsdev.log
00:14:30.962  hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:14:30.962  <vhostfedora-cloud-23052> job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512
00:14:30.962  <vhostfedora-cloud-23052> Starting 1 thread
00:14:30.962  <vhostfedora-cloud-23052> job0: Laying out IO file (1 file / 1024MiB)
00:14:30.962  <vhostfedora-cloud-23052> 
00:14:30.962  job0: (groupid=0, jobs=1): err= 0: pid=975: Mon Dec  9 11:04:44 2024
00:14:30.962    read: IOPS=34.1k, BW=133MiB/s (140MB/s)(512MiB/3838msec)
00:14:30.962      slat (usec): min=2, max=131, avg= 5.06, stdev= 2.90
00:14:30.962      clat (usec): min=3287, max=14512, avg=7543.37, stdev=293.64
00:14:30.962       lat (usec): min=3291, max=14518, avg=7548.43, stdev=293.67
00:14:30.962      clat percentiles (usec):
00:14:30.962       |  1.00th=[ 7242],  5.00th=[ 7308], 10.00th=[ 7373], 20.00th=[ 7439],
00:14:30.962       | 30.00th=[ 7439], 40.00th=[ 7504], 50.00th=[ 7504], 60.00th=[ 7570],
00:14:30.962       | 70.00th=[ 7570], 80.00th=[ 7635], 90.00th=[ 7701], 95.00th=[ 7832],
00:14:30.962       | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[10814], 99.95th=[12911],
00:14:30.962       | 99.99th=[14353]
00:14:30.962     bw (  KiB/s): min=134736, max=137680, per=100.00%, avg=136761.14, stdev=1161.56, samples=7
00:14:30.962     iops        : min=33684, max=34420, avg=34190.29, stdev=290.39, samples=7
00:14:30.962    write: IOPS=34.2k, BW=133MiB/s (140MB/s)(512MiB/3838msec); 0 zone resets
00:14:30.962      slat (nsec): min=3241, max=85858, avg=5811.51, stdev=3059.05
00:14:30.962      clat (usec): min=3172, max=14520, avg=7426.59, stdev=293.79
00:14:30.962       lat (usec): min=3178, max=14527, avg=7432.40, stdev=293.84
00:14:30.962      clat percentiles (usec):
00:14:30.962       |  1.00th=[ 7111],  5.00th=[ 7242], 10.00th=[ 7242], 20.00th=[ 7308],
00:14:30.962       | 30.00th=[ 7373], 40.00th=[ 7373], 50.00th=[ 7439], 60.00th=[ 7439],
00:14:30.962       | 70.00th=[ 7504], 80.00th=[ 7504], 90.00th=[ 7570], 95.00th=[ 7701],
00:14:30.962       | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[10683], 99.95th=[12649],
00:14:30.962       | 99.99th=[14353]
00:14:30.962     bw (  KiB/s): min=134576, max=138016, per=99.93%, avg=136547.43, stdev=1163.84, samples=7
00:14:30.962     iops        : min=33644, max=34504, avg=34136.86, stdev=290.96, samples=7
00:14:30.962    lat (msec)   : 4=0.15%, 10=99.73%, 20=0.12%
00:14:30.962    cpu          : usr=15.85%, sys=38.21%, ctx=8192, majf=0, minf=7
00:14:30.962    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:14:30.962       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:30.962       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:30.962       issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:30.962       latency   : target=0, window=0, percentile=100.00%, depth=512
00:14:30.962  
00:14:30.962  Run status group 0 (all jobs):
00:14:30.962     READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=512MiB (537MB), run=3838-3838msec
00:14:30.962    WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=512MiB (537MB), run=3838-3838msec
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1'
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1'
00:14:30.962  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...'
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...'
00:14:30.962  INFO: Shutting down virtual machine...
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@487 -- # local timeo=90 vms vm
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vms=($(vm_list_all))
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # vm_list_all
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # vms=()
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@466 -- # local vms
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:30.962    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # for vm in "${vms[@]}"
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@492 -- # vm_shutdown 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_num_is_valid 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@418 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@419 -- # [[ ! -d /root/vhost_test/vms/1 ]]
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # vm_is_running 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:30.962   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=200457
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 200457
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@431 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1'
00:14:30.963  INFO: Shutting down virtual machine /root/vhost_test/vms/1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@432 -- # set +e
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@433 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@336 -- # vm_num_is_valid 1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@338 -- # local vm_num=1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@339 -- # shift
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # vm_ssh_socket 1
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/ssh_socket
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\'''
00:14:30.963  Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@434 -- # notice 'VM1 is shutting down - wait a while to complete'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete'
00:14:30.963  INFO: VM1 is shutting down - wait a while to complete
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@435 -- # set -e
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@495 -- # notice 'Waiting for VMs to shutdown...'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...'
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...'
00:14:30.963  INFO: Waiting for VMs to shutdown...
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:30.963    11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=200457
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 200457
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:30.963   11:04:45 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@376 -- # local vm_pid
00:14:30.963    11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # cat /root/vhost_test/vms/1/qemu.pid
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@377 -- # vm_pid=200457
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@379 -- # /bin/kill -0 200457
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@380 -- # return 0
00:14:30.963   11:04:46 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 1 > 0 ))
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # for vm in "${!vms[@]}"
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # vm_is_running 1
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # vm_num_is_valid 1
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@309 -- # return 0
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/1
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 1
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # unset -v 'vms[vm]'
00:14:30.963   11:04:47 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@500 -- # sleep 1
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( timeo-- > 0 && 0 > 0 ))
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@503 -- # (( 0 == 0 ))
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@504 -- # notice 'All VMs successfully shut down'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down'
00:14:31.898  INFO: All VMs successfully shut down
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@505 -- # return 0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 ]]
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir
00:14:31.898    11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0
00:14:31.898    11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0
00:14:31.898    11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]]
00:14:31.898    11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]]
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid
00:14:31.898    11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=199992
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 199992) app'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 199992) app'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 199992) app'
00:14:31.898  INFO: killing vhost (PID 199992) app
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 199992
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out
00:14:31.898   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out=
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit'
00:14:31.899  INFO: sent SIGINT to vhost app - waiting 60 seconds to exit
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 ))
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 199992
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:14:31.899  .
00:14:31.899   11:04:48 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:14:32.832  [2024-12-09 11:04:49.761469] vfu_virtio_fs.c: 301:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted
00:14:33.090   11:04:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:14:33.090   11:04:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:33.090   11:04:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 199992
00:14:33.090   11:04:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:14:33.090  .
00:14:33.090   11:04:49 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:14:34.027   11:04:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:14:34.027   11:04:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:34.027   11:04:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 199992
00:14:34.027   11:04:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo .
00:14:34.027  .
00:14:34.027   11:04:50 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ ))
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 ))
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 199992
00:14:34.965  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (199992) - No such process
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 199992
00:14:34.965  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (199992) - No such process
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 199992
00:14:34.965  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (199992) - No such process
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@245 -- # is_pid_child 199992
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1686 -- # local pid=199992 _pid
00:14:34.965    11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1685 -- # jobs -pr
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1689 -- # (( pid == _pid ))
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1688 -- # read -r _pid
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1692 -- # return 1
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@257 -- # timing_exit vhost_kill
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@259 -- # rm -rf /root/vhost_test/vhost/0
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@261 -- # return 0
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']'
00:14:34.965  
00:14:34.965  real	0m53.135s
00:14:34.965  user	3m23.876s
00:14:34.965  sys	0m2.846s
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:34.965   11:04:51 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x
00:14:34.965  ************************************
00:14:34.965  END TEST vfio_user_virtio_fs_fio
00:14:34.965  ************************************
00:14:34.965   11:04:51 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini
00:14:34.965   11:04:51 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']'
00:14:34.965   11:04:51 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:14:35.910  Waiting for block devices as requested
00:14:36.170  0000:00:04.7 (8086 6f27): vfio-pci -> ioatdma
00:14:36.170  0000:00:04.6 (8086 6f26): vfio-pci -> ioatdma
00:14:36.170  0000:00:04.5 (8086 6f25): vfio-pci -> ioatdma
00:14:36.428  0000:00:04.4 (8086 6f24): vfio-pci -> ioatdma
00:14:36.428  0000:00:04.3 (8086 6f23): vfio-pci -> ioatdma
00:14:36.428  0000:00:04.2 (8086 6f22): vfio-pci -> ioatdma
00:14:36.428  0000:00:04.1 (8086 6f21): vfio-pci -> ioatdma
00:14:36.687  0000:00:04.0 (8086 6f20): vfio-pci -> ioatdma
00:14:36.688  0000:80:04.7 (8086 6f27): vfio-pci -> ioatdma
00:14:36.688  0000:80:04.6 (8086 6f26): vfio-pci -> ioatdma
00:14:36.688  0000:80:04.5 (8086 6f25): vfio-pci -> ioatdma
00:14:36.947  0000:80:04.4 (8086 6f24): vfio-pci -> ioatdma
00:14:36.947  0000:80:04.3 (8086 6f23): vfio-pci -> ioatdma
00:14:36.947  0000:80:04.2 (8086 6f22): vfio-pci -> ioatdma
00:14:36.947  0000:80:04.1 (8086 6f21): vfio-pci -> ioatdma
00:14:37.206  0000:80:04.0 (8086 6f20): vfio-pci -> ioatdma
00:14:37.206  0000:0d:00.0 (8086 0a54): vfio-pci -> nvme
00:14:37.465  
00:14:37.465  real	7m38.524s
00:14:37.465  user	31m14.585s
00:14:37.465  sys	0m17.669s
00:14:37.465   11:04:54 vfio_user_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:37.465   11:04:54 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:37.465  ************************************
00:14:37.465  END TEST vfio_user_qemu
00:14:37.465  ************************************
00:14:37.466   11:04:54  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:14:37.466   11:04:54  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:14:37.466   11:04:54  -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]]
00:14:37.466   11:04:54  -- spdk/autotest.sh@371 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:14:37.466   11:04:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:37.466   11:04:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:37.466   11:04:54  -- common/autotest_common.sh@10 -- # set +x
00:14:37.466  ************************************
00:14:37.466  START TEST sma
00:14:37.466  ************************************
00:14:37.466   11:04:54 sma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh
00:14:37.466  * Looking for test storage...
00:14:37.466  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:37.466     11:04:54 sma -- common/autotest_common.sh@1711 -- # lcov --version
00:14:37.466     11:04:54 sma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:37.466    11:04:54 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:37.466    11:04:54 sma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:37.466    11:04:54 sma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:37.466    11:04:54 sma -- scripts/common.sh@336 -- # IFS=.-:
00:14:37.466    11:04:54 sma -- scripts/common.sh@336 -- # read -ra ver1
00:14:37.466    11:04:54 sma -- scripts/common.sh@337 -- # IFS=.-:
00:14:37.466    11:04:54 sma -- scripts/common.sh@337 -- # read -ra ver2
00:14:37.466    11:04:54 sma -- scripts/common.sh@338 -- # local 'op=<'
00:14:37.466    11:04:54 sma -- scripts/common.sh@340 -- # ver1_l=2
00:14:37.466    11:04:54 sma -- scripts/common.sh@341 -- # ver2_l=1
00:14:37.466    11:04:54 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:37.466    11:04:54 sma -- scripts/common.sh@344 -- # case "$op" in
00:14:37.466    11:04:54 sma -- scripts/common.sh@345 -- # : 1
00:14:37.466    11:04:54 sma -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:37.466    11:04:54 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:37.466     11:04:54 sma -- scripts/common.sh@365 -- # decimal 1
00:14:37.466     11:04:54 sma -- scripts/common.sh@353 -- # local d=1
00:14:37.466     11:04:54 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:37.466     11:04:54 sma -- scripts/common.sh@355 -- # echo 1
00:14:37.466    11:04:54 sma -- scripts/common.sh@365 -- # ver1[v]=1
00:14:37.466     11:04:54 sma -- scripts/common.sh@366 -- # decimal 2
00:14:37.466     11:04:54 sma -- scripts/common.sh@353 -- # local d=2
00:14:37.466     11:04:54 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:37.466     11:04:54 sma -- scripts/common.sh@355 -- # echo 2
00:14:37.466    11:04:54 sma -- scripts/common.sh@366 -- # ver2[v]=2
00:14:37.466    11:04:54 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:37.466    11:04:54 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:37.466    11:04:54 sma -- scripts/common.sh@368 -- # return 0
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:37.466  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.466  		--rc genhtml_branch_coverage=1
00:14:37.466  		--rc genhtml_function_coverage=1
00:14:37.466  		--rc genhtml_legend=1
00:14:37.466  		--rc geninfo_all_blocks=1
00:14:37.466  		--rc geninfo_unexecuted_blocks=1
00:14:37.466  		
00:14:37.466  		'
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:37.466  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.466  		--rc genhtml_branch_coverage=1
00:14:37.466  		--rc genhtml_function_coverage=1
00:14:37.466  		--rc genhtml_legend=1
00:14:37.466  		--rc geninfo_all_blocks=1
00:14:37.466  		--rc geninfo_unexecuted_blocks=1
00:14:37.466  		
00:14:37.466  		'
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:37.466  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.466  		--rc genhtml_branch_coverage=1
00:14:37.466  		--rc genhtml_function_coverage=1
00:14:37.466  		--rc genhtml_legend=1
00:14:37.466  		--rc geninfo_all_blocks=1
00:14:37.466  		--rc geninfo_unexecuted_blocks=1
00:14:37.466  		
00:14:37.466  		'
00:14:37.466    11:04:54 sma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:37.466  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.466  		--rc genhtml_branch_coverage=1
00:14:37.466  		--rc genhtml_function_coverage=1
00:14:37.466  		--rc genhtml_legend=1
00:14:37.466  		--rc geninfo_all_blocks=1
00:14:37.466  		--rc geninfo_unexecuted_blocks=1
00:14:37.466  		
00:14:37.466  		'
00:14:37.466   11:04:54 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:14:37.466   11:04:54 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:37.466   11:04:54 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:37.466   11:04:54 sma -- common/autotest_common.sh@10 -- # set +x
00:14:37.466  ************************************
00:14:37.466  START TEST sma_nvmf_tcp
00:14:37.466  ************************************
00:14:37.466   11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh
00:14:37.466  * Looking for test storage...
00:14:37.466  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:37.466    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:37.466     11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:14:37.466     11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:37.726     11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.726  		--rc genhtml_branch_coverage=1
00:14:37.726  		--rc genhtml_function_coverage=1
00:14:37.726  		--rc genhtml_legend=1
00:14:37.726  		--rc geninfo_all_blocks=1
00:14:37.726  		--rc geninfo_unexecuted_blocks=1
00:14:37.726  		
00:14:37.726  		'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.726  		--rc genhtml_branch_coverage=1
00:14:37.726  		--rc genhtml_function_coverage=1
00:14:37.726  		--rc genhtml_legend=1
00:14:37.726  		--rc geninfo_all_blocks=1
00:14:37.726  		--rc geninfo_unexecuted_blocks=1
00:14:37.726  		
00:14:37.726  		'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.726  		--rc genhtml_branch_coverage=1
00:14:37.726  		--rc genhtml_function_coverage=1
00:14:37.726  		--rc genhtml_legend=1
00:14:37.726  		--rc geninfo_all_blocks=1
00:14:37.726  		--rc geninfo_unexecuted_blocks=1
00:14:37.726  		
00:14:37.726  		'
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:37.726  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:37.726  		--rc genhtml_branch_coverage=1
00:14:37.726  		--rc genhtml_function_coverage=1
00:14:37.726  		--rc genhtml_legend=1
00:14:37.726  		--rc geninfo_all_blocks=1
00:14:37.726  		--rc geninfo_unexecuted_blocks=1
00:14:37.726  		
00:14:37.726  		'
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=210608
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=210609
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:14:37.726    11:04:54 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i = 0 ))
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:37.726   11:04:54 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:14:37.726  [2024-12-09 11:04:54.618178] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:14:37.726  [2024-12-09 11:04:54.618296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210608 ]
00:14:37.726  EAL: No free 2048 kB hugepages reported on node 1
00:14:37.985  [2024-12-09 11:04:54.745585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:37.985  [2024-12-09 11:04:54.855399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:38.925  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:38.925  I0000 00:00:1733738695.565642  210609 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:38.925   11:04:55 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:14:38.925   11:04:55 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:38.925   11:04:55 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:38.925   11:04:55 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s
00:14:38.925  [2024-12-09 11:04:55.665586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ ))
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 ))
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.860  null0
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:39.860  [
00:14:39.860  {
00:14:39.860  "trtype": "TCP",
00:14:39.860  "max_queue_depth": 128,
00:14:39.860  "max_io_qpairs_per_ctrlr": 127,
00:14:39.860  "in_capsule_data_size": 4096,
00:14:39.860  "max_io_size": 131072,
00:14:39.860  "io_unit_size": 131072,
00:14:39.860  "max_aq_depth": 128,
00:14:39.860  "num_shared_buffers": 511,
00:14:39.860  "buf_cache_size": 4294967295,
00:14:39.860  "dif_insert_or_strip": false,
00:14:39.860  "zcopy": false,
00:14:39.860  "c2h_success": true,
00:14:39.860  "sock_priority": 0,
00:14:39.860  "abort_timeout_sec": 1,
00:14:39.860  "ack_timeout": 0,
00:14:39.860  "data_wr_pool_size": 0
00:14:39.860  }
00:14:39.860  ]
00:14:39.860   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.860    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:39.860    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:39.860    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle
00:14:39.860  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:39.861  I0000 00:00:1733738696.857716  211050 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:39.861  I0000 00:00:1733738696.859257  211050 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.119  I0000 00:00:1733738696.873644  211051 subchannel.cc:806] subchannel 0x56497d53ab20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56497d525840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56497d63f380, grpc.internal.client_channel_call_destination=0x7f0e7c45e390, grpc.internal.event_engine=0x56497d456ca0, grpc.internal.security_connector=0x56497d53d850, grpc.internal.subchannel_pool=0x56497d53d6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56497d384770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:56.873231495+01:00"}), backing off for 1000 ms
00:14:40.119  [2024-12-09 11:04:56.893302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:14:40.119   11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:40.119   11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:40.119   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.119   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.119  [
00:14:40.119  {
00:14:40.119  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:40.119  "subtype": "NVMe",
00:14:40.119  "listen_addresses": [
00:14:40.119  {
00:14:40.119  "trtype": "TCP",
00:14:40.119  "adrfam": "IPv4",
00:14:40.119  "traddr": "127.0.0.1",
00:14:40.119  "trsvcid": "4420"
00:14:40.119  }
00:14:40.119  ],
00:14:40.119  "allow_any_host": false,
00:14:40.119  "hosts": [],
00:14:40.119  "serial_number": "00000000000000000000",
00:14:40.119  "model_number": "SPDK bdev Controller",
00:14:40.119  "max_namespaces": 32,
00:14:40.119  "min_cntlid": 1,
00:14:40.119  "max_cntlid": 65519,
00:14:40.119  "namespaces": []
00:14:40.119  }
00:14:40.119  ]
00:14:40.119   11:04:56 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.119    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:40.119    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle
00:14:40.119    11:04:56 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.119  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.119  I0000 00:00:1733738697.126216  211075 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.119  I0000 00:00:1733738697.128167  211075 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.378  I0000 00:00:1733738697.129562  211081 subchannel.cc:806] subchannel 0x55bfff797b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55bfff782840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55bfff89c380, grpc.internal.client_channel_call_destination=0x7fc89d160390, grpc.internal.event_engine=0x55bfff6b3ca0, grpc.internal.security_connector=0x55bfff79a850, grpc.internal.subchannel_pool=0x55bfff79a6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55bfff5e1770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:57.129028112+01:00"}), backing off for 1000 ms
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.378  [
00:14:40.378  {
00:14:40.378  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:40.378  "subtype": "NVMe",
00:14:40.378  "listen_addresses": [
00:14:40.378  {
00:14:40.378  "trtype": "TCP",
00:14:40.378  "adrfam": "IPv4",
00:14:40.378  "traddr": "127.0.0.1",
00:14:40.378  "trsvcid": "4420"
00:14:40.378  }
00:14:40.378  ],
00:14:40.378  "allow_any_host": false,
00:14:40.378  "hosts": [],
00:14:40.378  "serial_number": "00000000000000000000",
00:14:40.378  "model_number": "SPDK bdev Controller",
00:14:40.378  "max_namespaces": 32,
00:14:40.378  "min_cntlid": 1,
00:14:40.378  "max_cntlid": 65519,
00:14:40.378  "namespaces": []
00:14:40.378  }
00:14:40.378  ]
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.378  [
00:14:40.378  {
00:14:40.378  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:40.378  "subtype": "NVMe",
00:14:40.378  "listen_addresses": [
00:14:40.378  {
00:14:40.378  "trtype": "TCP",
00:14:40.378  "adrfam": "IPv4",
00:14:40.378  "traddr": "127.0.0.1",
00:14:40.378  "trsvcid": "4420"
00:14:40.378  }
00:14:40.378  ],
00:14:40.378  "allow_any_host": false,
00:14:40.378  "hosts": [],
00:14:40.378  "serial_number": "00000000000000000000",
00:14:40.378  "model_number": "SPDK bdev Controller",
00:14:40.378  "max_namespaces": 32,
00:14:40.378  "min_cntlid": 1,
00:14:40.378  "max_cntlid": 65519,
00:14:40.378  "namespaces": []
00:14:40.378  }
00:14:40.378  ]
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length'
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.378   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]]
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.378    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle
00:14:40.638  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.638  I0000 00:00:1733738697.435013  211220 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.638  I0000 00:00:1733738697.436660  211220 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.638  I0000 00:00:1733738697.438003  211308 subchannel.cc:806] subchannel 0x55c79a40fb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c79a3fa840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c79a514380, grpc.internal.client_channel_call_destination=0x7f3458636390, grpc.internal.event_engine=0x55c79a32bca0, grpc.internal.security_connector=0x55c79a412850, grpc.internal.subchannel_pool=0x55c79a4126b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c79a259770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:57.437475788+01:00"}), backing off for 1000 ms
00:14:40.638   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:40.638    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:40.638    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle
00:14:40.638    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:40.897  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:40.897  I0000 00:00:1733738697.670837  211331 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:40.897  I0000 00:00:1733738697.672630  211331 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:40.897  I0000 00:00:1733738697.673938  211335 subchannel.cc:806] subchannel 0x55b3a9002b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b3a8fed840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b3a9107380, grpc.internal.client_channel_call_destination=0x7f78dffb5390, grpc.internal.event_engine=0x55b3a8f1eca0, grpc.internal.security_connector=0x55b3a9005850, grpc.internal.subchannel_pool=0x55b3a90056b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b3a8e4c770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:57.673408107+01:00"}), backing off for 1000 ms
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:40.897    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems
00:14:40.897    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.897    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:40.897    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length'
00:14:40.897    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]]
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:40.897   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.156  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.156  I0000 00:00:1733738697.940289  211358 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.156  I0000 00:00:1733738697.941987  211358 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.156  I0000 00:00:1733738697.943372  211359 subchannel.cc:806] subchannel 0x558a6d4c8b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558a6d4b3840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558a6d5cd380, grpc.internal.client_channel_call_destination=0x7f0b0d708390, grpc.internal.event_engine=0x558a6d3e4ca0, grpc.internal.security_connector=0x558a6d4d2df0, grpc.internal.subchannel_pool=0x558a6d4cb6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558a6d312770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:57.94290915+01:00"}), backing off for 999 ms
00:14:41.156  {}
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.156    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.156  [2024-12-09 11:04:57.988038] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist
00:14:41.156  request:
00:14:41.156  {
00:14:41.156    "nqn": "nqn.2016-06.io.spdk:cnode0",
00:14:41.156    "method": "nvmf_get_subsystems",
00:14:41.156    "req_id": 1
00:14:41.156  }
00:14:41.156  Got JSON-RPC error response
00:14:41.156  response:
00:14:41.156  {
00:14:41.156    "code": -19,
00:14:41.156    "message": "No such device"
00:14:41.156  }
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:41.156   11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:41.156    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems
00:14:41.156    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.156    11:04:57 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.156    11:04:57 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length'
00:14:41.156    11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.156   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]]
00:14:41.157   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:41.157   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.415  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.415  I0000 00:00:1733738698.214528  211383 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.415  I0000 00:00:1733738698.216019  211383 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.415  I0000 00:00:1733738698.217192  211384 subchannel.cc:806] subchannel 0x556556908b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5565568f3840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556556a0d380, grpc.internal.client_channel_call_destination=0x7f79dad35390, grpc.internal.event_engine=0x556556824ca0, grpc.internal.security_connector=0x556556912df0, grpc.internal.subchannel_pool=0x55655690b6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556556752770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:58.216730259+01:00"}), backing off for 999 ms
00:14:41.415  {}
00:14:41.415   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:41.415   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # local es=0
00:14:41.415   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:41.415   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:14:41.415   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.415    11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.416  [2024-12-09 11:04:58.256812] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist
00:14:41.416  request:
00:14:41.416  {
00:14:41.416    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:41.416    "method": "nvmf_get_subsystems",
00:14:41.416    "req_id": 1
00:14:41.416  }
00:14:41.416  Got JSON-RPC error response
00:14:41.416  response:
00:14:41.416  {
00:14:41.416    "code": -19,
00:14:41.416    "message": "No such device"
00:14:41.416  }
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@655 -- # es=1
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:41.416    11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems
00:14:41.416    11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. | length'
00:14:41.416    11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:41.416    11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:41.416    11:04:58 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]]
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:41.416   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.674  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.674  I0000 00:00:1733738698.520748  211413 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.674  I0000 00:00:1733738698.522572  211413 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.674  I0000 00:00:1733738698.523717  211610 subchannel.cc:806] subchannel 0x55be2c7e7b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55be2c7d2840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55be2c8ec380, grpc.internal.client_channel_call_destination=0x7fcd42acf390, grpc.internal.event_engine=0x55be2c703ca0, grpc.internal.security_connector=0x55be2c7f1df0, grpc.internal.subchannel_pool=0x55be2c7ea6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55be2c631770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:58.523327793+01:00"}), backing off for 1000 ms
00:14:41.674  {}
00:14:41.674   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:41.674   11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.933  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:41.933  I0000 00:00:1733738698.751825  211630 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:41.933  I0000 00:00:1733738698.753364  211630 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:41.933  I0000 00:00:1733738698.754527  211635 subchannel.cc:806] subchannel 0x558b8d0deb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558b8d0c9840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558b8d1e3380, grpc.internal.client_channel_call_destination=0x7fa362404390, grpc.internal.event_engine=0x558b8cffaca0, grpc.internal.security_connector=0x558b8d0e8df0, grpc.internal.subchannel_pool=0x558b8d0e16b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558b8cf28770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:58.754072098+01:00"}), backing off for 1000 ms
00:14:41.933  {}
00:14:41.933    11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device nqn.2016-06.io.spdk:cnode0
00:14:41.933    11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:41.933    11:04:58 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle
00:14:42.192  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.192  I0000 00:00:1733738698.980250  211658 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.192  I0000 00:00:1733738698.982028  211658 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.192  I0000 00:00:1733738698.983325  211659 subchannel.cc:806] subchannel 0x5628f1b7ab20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5628f1b65840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5628f1c7f380, grpc.internal.client_channel_call_destination=0x7fa04396c390, grpc.internal.event_engine=0x5628f1a96ca0, grpc.internal.security_connector=0x5628f1b7d850, grpc.internal.subchannel_pool=0x5628f1b7d6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5628f19c4770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:58.982827311+01:00"}), backing off for 999 ms
00:14:42.192  [2024-12-09 11:04:59.003251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:14:42.192   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:14:42.192    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1
00:14:42.192    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.192    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle
00:14:42.450  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.450  I0000 00:00:1733738699.229506  211682 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.450  I0000 00:00:1733738699.231142  211682 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.450  I0000 00:00:1733738699.232381  211684 subchannel.cc:806] subchannel 0x55555cfa7b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55555cf92840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55555d0ac380, grpc.internal.client_channel_call_destination=0x7fb18f023390, grpc.internal.event_engine=0x55555cec3ca0, grpc.internal.security_connector=0x55555cfaa850, grpc.internal.subchannel_pool=0x55555cfaa6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55555cdf1770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:59.231956886+01:00"}), backing off for 1000 ms
00:14:42.450   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid'
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.450   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:42.450   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:42.450   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:42.450    11:04:59 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:42.709  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:42.709  I0000 00:00:1733738699.611359  211708 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:42.709  I0000 00:00:1733738699.613017  211708 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:42.709  I0000 00:00:1733738699.614348  211829 subchannel.cc:806] subchannel 0x5565a9069b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5565a9054840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5565a916e380, grpc.internal.client_channel_call_destination=0x7fc0946f1390, grpc.internal.event_engine=0x5565a8f85ca0, grpc.internal.security_connector=0x5565a906c850, grpc.internal.subchannel_pool=0x5565a906c6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5565a8eb3770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:04:59.613885096+01:00"}), backing off for 999 ms
00:14:42.709  {}
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length'
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.709   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]]
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length'
00:14:42.709    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.967   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]]
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid'
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.967   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 6c682a8f-a6ba-4a03-8746-ea12bc295d44 == \6\c\6\8\2\a\8\f\-\a\6\b\a\-\4\a\0\3\-\8\7\4\6\-\e\a\1\2\b\c\2\9\5\d\4\4 ]]
00:14:42.967   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:42.967   11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:42.967    11:04:59 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:43.227  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:43.227  I0000 00:00:1733738700.001281  211940 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:43.227  I0000 00:00:1733738700.003079  211940 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:43.227  I0000 00:00:1733738700.004439  211943 subchannel.cc:806] subchannel 0x55f479393b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f47937e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f479498380, grpc.internal.client_channel_call_destination=0x7f3791083390, grpc.internal.event_engine=0x55f4792afca0, grpc.internal.security_connector=0x55f479396850, grpc.internal.subchannel_pool=0x55f4793966b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f4791dd770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:05:00.003921761+01:00"}), backing off for 999 ms
00:14:43.227  {}
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length'
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.227   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]]
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length'
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.227   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]]
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid'
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.227   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 6c682a8f-a6ba-4a03-8746-ea12bc295d44 == \6\c\6\8\2\a\8\f\-\a\6\b\a\-\4\a\0\3\-\8\7\4\6\-\e\a\1\2\b\c\2\9\5\d\4\4 ]]
00:14:43.227   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:43.227   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:43.227    11:05:00 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:43.486  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:43.486  I0000 00:00:1733738700.426552  211972 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:43.486  I0000 00:00:1733738700.428351  211972 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:43.486  I0000 00:00:1733738700.429776  211981 subchannel.cc:806] subchannel 0x55a9053deb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a9053c9840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a9054e3380, grpc.internal.client_channel_call_destination=0x7fbd4cc44390, grpc.internal.event_engine=0x55a9052faca0, grpc.internal.security_connector=0x55a9053e1850, grpc.internal.subchannel_pool=0x55a9053e16b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a905228770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:05:00.429270385+01:00"}), backing off for 1000 ms
00:14:43.486  {}
00:14:43.486    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:14:43.486    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length'
00:14:43.486    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.486    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.486    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.746   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]]
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length'
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:43.746   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]]
00:14:43.746   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:43.746   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 6c682a8f-a6ba-4a03-8746-ea12bc295d44
00:14:43.746    11:05:00 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python
00:14:44.005  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:14:44.005  I0000 00:00:1733738700.826423  212016 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:14:44.005  I0000 00:00:1733738700.828165  212016 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:14:44.005  I0000 00:00:1733738700.829602  212135 subchannel.cc:806] subchannel 0x560f91638b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560f91623840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560f9173d380, grpc.internal.client_channel_call_destination=0x7faf59283390, grpc.internal.event_engine=0x560f91554ca0, grpc.internal.security_connector=0x560f9163b850, grpc.internal.subchannel_pool=0x560f9163b6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560f91482770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:05:00.829068682+01:00"}), backing off for 1000 ms
00:14:44.005  {}
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 210608
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 210608 ']'
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 210608
00:14:44.005    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:44.005    11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210608
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210608'
00:14:44.005  killing process with pid 210608
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 210608
00:14:44.005   11:05:00 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 210608
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 210609
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 210609 ']'
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 210609
00:14:45.909    11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:45.909    11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210609
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=python3
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210609'
00:14:45.909  killing process with pid 210609
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 210609
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 210609
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT
00:14:45.909  
00:14:45.909  real	0m8.444s
00:14:45.909  user	0m11.788s
00:14:45.909  sys	0m1.545s
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:45.909   11:05:02 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:45.909  ************************************
00:14:45.909  END TEST sma_nvmf_tcp
00:14:45.909  ************************************
00:14:45.909   11:05:02 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:45.909   11:05:02 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:45.909   11:05:02 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:45.909   11:05:02 sma -- common/autotest_common.sh@10 -- # set +x
00:14:45.909  ************************************
00:14:45.909  START TEST sma_vfiouser_qemu
00:14:45.909  ************************************
00:14:45.909   11:05:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:46.168  * Looking for test storage...
00:14:46.168  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:46.168    11:05:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:46.168     11:05:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lcov --version
00:14:46.168     11:05:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-:
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-:
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<'
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:46.168     11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:46.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.168  		--rc genhtml_branch_coverage=1
00:14:46.168  		--rc genhtml_function_coverage=1
00:14:46.168  		--rc genhtml_legend=1
00:14:46.168  		--rc geninfo_all_blocks=1
00:14:46.168  		--rc geninfo_unexecuted_blocks=1
00:14:46.168  		
00:14:46.168  		'
00:14:46.168    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:46.168  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.168  		--rc genhtml_branch_coverage=1
00:14:46.168  		--rc genhtml_function_coverage=1
00:14:46.168  		--rc genhtml_legend=1
00:14:46.168  		--rc geninfo_all_blocks=1
00:14:46.168  		--rc geninfo_unexecuted_blocks=1
00:14:46.168  		
00:14:46.169  		'
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:46.169  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.169  		--rc genhtml_branch_coverage=1
00:14:46.169  		--rc genhtml_function_coverage=1
00:14:46.169  		--rc genhtml_legend=1
00:14:46.169  		--rc geninfo_all_blocks=1
00:14:46.169  		--rc geninfo_unexecuted_blocks=1
00:14:46.169  		
00:14:46.169  		'
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:46.169  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:46.169  		--rc genhtml_branch_coverage=1
00:14:46.169  		--rc genhtml_function_coverage=1
00:14:46.169  		--rc genhtml_legend=1
00:14:46.169  		--rc geninfo_all_blocks=1
00:14:46.169  		--rc geninfo_unexecuted_blocks=1
00:14:46.169  		
00:14:46.169  		'
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- # : /root/vhost_test
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:14:46.169       11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:14:46.169     11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:14:46.169      11:05:03 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:14:46.169       11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:14:46.169        11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup
00:14:46.169        11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:14:46.169        11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:14:46.169        11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2
00:14:46.169       11:05:03 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]]
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:14:46.169    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/1
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 1
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 1
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/1
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]]
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@446 -- # return 0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@518 -- # xtrace_disable
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:46.169  INFO: Creating new VM in /root/vhost_test/vms/0
00:14:46.169  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:14:46.169  INFO: TASK MASK: 1-2
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # local node_num=0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # local boot_disk_present=false
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:14:46.169  INFO: NUMA NODE: 0
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:14:46.169   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # [[ -n '' ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@686 -- # [[ -z '' ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # IFS=,
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@701 -- # read -r disk disk_type _
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # [[ -z '' ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@702 -- # disk_type=virtio
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@704 -- # case $disk_type in
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:14:46.170  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:14:46.170   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:14:46.738  1024+0 records in
00:14:46.738  1024+0 records out
00:14:46.738  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.450764 s, 2.4 GB/s
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # [[ -n '' ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # (( 1 ))
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:14:46.738  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # cat
00:14:46.738    11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10000
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 10001
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@829 -- # echo 10002
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@832 -- # [[ -z '' ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@834 -- # echo 10004
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # echo 100
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # [[ -z '' ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@838 -- # [[ -z '' ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@843 -- # local run_all=false
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@844 -- # local vms_to_run=
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@846 -- # getopts a-: optchar
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@856 -- # false
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # shift 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@860 -- # for vm in "$@"
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@871 -- # vm_is_running 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@373 -- # return 1
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:14:46.738  INFO: running /root/vhost_test/vms/0/run.sh
00:14:46.738   11:05:03 sma.sma_vfiouser_qemu -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:14:46.738  Running VM in /root/vhost_test/vms/0
00:14:46.996  Waiting for QEMU pid file
00:14:47.933  === qemu.log ===
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- vhost/common.sh@913 -- # assert_number 300
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- vhost/common.sh@281 -- # return 0
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- vhost/common.sh@915 -- # xtrace_disable
00:14:47.933   11:05:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:14:48.194  INFO: Waiting for VMs to boot
00:14:48.194  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:15:10.134  
00:15:10.134  INFO: VM0 ready
00:15:10.134  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:10.134  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:11.071  INFO: all VMs ready
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- vhost/common.sh@973 -- # return 0
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=216927
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 216927
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # '[' -z 216927 ']'
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:11.071  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:11.071   11:05:27 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.071  [2024-12-09 11:05:27.911244] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:15:11.071  [2024-12-09 11:05:27.911347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216927 ]
00:15:11.071  EAL: No free 2048 kB hugepages reported on node 1
00:15:11.071  [2024-12-09 11:05:28.044054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:11.330  [2024-12-09 11:05:28.153028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@868 -- # return 0
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.897  [2024-12-09 11:05:28.735179] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.897  [2024-12-09 11:05:28.743160] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:11.897  [2024-12-09 11:05:28.751193] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:11.897   11:05:28 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:12.156  [2024-12-09 11:05:28.962863] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:15:12.723   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.723   11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096
00:15:12.723   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.723   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:12.723  null0
00:15:12.723   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:12.724  null1
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=217340
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:12.724    11:05:29 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 ))
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:12.724   11:05:29 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s
00:15:12.724  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:12.724  I0000 00:00:1733738729.731799  217340 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ ))
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 ))
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0
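The `sma_waitforlisten` trace above (sma/common.sh lines 10-14) is a bounded poll: up to 5 `nc -z` probes against 127.0.0.1:8080, sleeping 1s between attempts, returning 0 as soon as the SMA server accepts connections. A minimal sketch of the same pattern, with host, port, retry count, and delay turned into parameters (the parameter names are ours, not SPDK's; the demo probes a port that is almost certainly closed, so it exercises the timeout path):

```shell
#!/usr/bin/env bash
# Poll until a TCP port accepts connections, as sma_waitforlisten does.
# Returns 0 once the port is open, 1 after all retries are exhausted.
wait_for_port() {
    local host=$1 port=$2 retries=${3:-5} delay=${4:-1}
    local i
    for ((i = 0; i < retries; i++)); do
        if nc -z "$host" "$port" 2> /dev/null; then
            return 0
        fi
        sleep "$delay"
    done
    return 1
}

# Nothing listens on port 1 (tcpmux) on localhost, so this gives up quickly.
wait_for_port 127.0.0.1 1 2 0.1
rc=$?
echo "rc=$rc"
```

In the log the first probe fails (the server is still starting), the loop sleeps once, and the second probe at 11:05:30 hits `return 0`.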
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:13.660  [
00:15:13.660  {
00:15:13.660  "trtype": "VFIOUSER",
00:15:13.660  "max_queue_depth": 256,
00:15:13.660  "max_io_qpairs_per_ctrlr": 127,
00:15:13.660  "in_capsule_data_size": 0,
00:15:13.660  "max_io_size": 131072,
00:15:13.660  "io_unit_size": 131072,
00:15:13.660  "max_aq_depth": 32,
00:15:13.660  "num_shared_buffers": 0,
00:15:13.660  "buf_cache_size": 0,
00:15:13.660  "dif_insert_or_strip": false,
00:15:13.660  "zcopy": false,
00:15:13.660  "abort_timeout_sec": 0,
00:15:13.660  "ack_timeout": 0,
00:15:13.660  "data_wr_pool_size": 0
00:15:13.660  }
00:15:13.660  ]
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:13.660    11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:13.660   11:05:30 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'
00:15:13.660  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
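The `vm_exec` expansion traced above (vhost/common.sh@336-341) resolves the VM's forwarded SSH port from `/root/vhost_test/vms/<id>/ssh_socket` and runs the command in the guest via `sshpass`. A dry-run sketch that only builds the command line, with a temp directory standing in for the real `/root/vhost_test` tree (the mocked path and the `vm_exec_cmdline` name are ours; nothing is actually executed over SSH):

```shell
#!/usr/bin/env bash
# Mock the per-VM directory whose ssh_socket file holds the forwarded SSH port.
vm_dir=$(mktemp -d)
echo 10000 > "$vm_dir/ssh_socket"   # VM 0: guest :22 forwarded to host :10000

# Build (but do not execute) the ssh invocation vm_exec would run.
vm_exec_cmdline() {
    local ssh_port
    ssh_port=$(cat "$vm_dir/ssh_socket")
    local args=(sshpass -p root ssh
                -o UserKnownHostsFile=/dev/null
                -o StrictHostKeyChecking=no
                -o User=root -p "$ssh_port" 127.0.0.1 "$@")
    printf '%s\n' "${args[*]}"
}

cmd=$(vm_exec_cmdline 'hostname')
echo "$cmd"
```

The `UserKnownHostsFile=/dev/null` / `StrictHostKeyChecking=no` pair is why every guest command in the log re-prints the "Permanently added" host-key warning.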
00:15:13.920    11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0
00:15:13.920    11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle
00:15:13.920    11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:13.920    11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:13.920    11:05:30 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:14.178  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:14.178  I0000 00:00:1733738731.038200  217582 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:14.178  I0000 00:00:1733738731.040052  217582 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:14.178  [2024-12-09 11:05:31.043725] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:14.437   11:05:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:14.437   11:05:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:14.437   11:05:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.437   11:05:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:14.437  [
00:15:14.437  {
00:15:14.437  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:14.437  "subtype": "NVMe",
00:15:14.437  "listen_addresses": [
00:15:14.437  {
00:15:14.437  "trtype": "VFIOUSER",
00:15:14.437  "adrfam": "IPv4",
00:15:14.437  "traddr": "/var/tmp/vfiouser-0",
00:15:14.437  "trsvcid": ""
00:15:14.437  }
00:15:14.437  ],
00:15:14.437  "allow_any_host": true,
00:15:14.437  "hosts": [],
00:15:14.438  "serial_number": "00000000000000000000",
00:15:14.438  "model_number": "SPDK bdev Controller",
00:15:14.438  "max_namespaces": 32,
00:15:14.438  "min_cntlid": 1,
00:15:14.438  "max_cntlid": 65519,
00:15:14.438  "namespaces": []
00:15:14.438  }
00:15:14.438  ]
00:15:14.438   11:05:31 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.438   11:05:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0
00:15:14.438   11:05:31 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:14.438  [2024-12-09 11:05:31.432269] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:15.374     11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:15.374    11:05:32 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:15.374  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:15.633   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:15:15.633   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
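`vm_check_subsys_nqn` (vfiouser_qemu.sh@90-91) decides whether the new vfio-user controller showed up in the guest by grepping `/sys/class/nvme/*/subsysnqn` for the subsystem NQN; a non-empty `grep -l` result means some `nvmeX` controller is attached to it. The same lookup against a mocked sysfs tree, so it runs outside the guest (the temp directory stands in for `/sys/class/nvme`):

```shell
#!/usr/bin/env bash
# Mock /sys/class/nvme with one controller whose subsystem NQN we control.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/nvme0"
echo 'nqn.2016-06.io.spdk:vfiouser-0' > "$sysfs/nvme0/subsysnqn"

# grep -l prints the path of the matching file, or nothing if no controller matches.
nqn_path=$(grep -l 'nqn.2016-06.io.spdk:vfiouser-0' "$sysfs"/*/subsysnqn)
echo "$nqn_path"
[[ -n $nqn_path ]] || echo 'subsystem not attached in guest'
```

In the run above the real check resolves to `/sys/class/nvme/nvme0/subsysnqn`, so the `[[ -z ... ]]` failure branch at line 91 is skipped.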
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length'
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.633   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]]
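The count check at vfiouser_qemu.sh@177 pipes `nvmf_get_subsystems` through `jq -r '. | length'` and asserts the result: the always-present discovery subsystem plus `vfiouser-0` gives 2. The same count-and-compare pattern, sketched against a canned subsystem list so it runs without a live SPDK target (the JSON below is abbreviated from the dumps in this log, and `grep -c` on the `"nqn"` key stands in for jq's `length`):

```shell
#!/usr/bin/env bash
# Canned stand-in for `rpc_cmd nvmf_get_subsystems` output.
subsystems='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery"},
  {"nqn": "nqn.2016-06.io.spdk:vfiouser-0"}
]'

# One "nqn" key per subsystem entry, so counting them counts subsystems.
count=$(printf '%s\n' "$subsystems" | grep -c '"nqn"')
echo "count=$count"
[[ $count -eq 2 ]] && echo 'subsystem count as expected'
```

The later checks in the log follow the same shape: 3 after `vfiouser-1` is created, back to 2 after `vfiouser-0` is deleted.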
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:15.633    11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:15.891  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:15.891  I0000 00:00:1733738732.692992  217832 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:15.891  I0000 00:00:1733738732.694626  217832 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:15.891  [2024-12-09 11:05:32.700326] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:15.891   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:15.891   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:15.891   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.891   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:15.891  [
00:15:15.891  {
00:15:15.891  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:15.891  "subtype": "NVMe",
00:15:15.891  "listen_addresses": [
00:15:15.891  {
00:15:15.891  "trtype": "VFIOUSER",
00:15:15.891  "adrfam": "IPv4",
00:15:15.891  "traddr": "/var/tmp/vfiouser-0",
00:15:15.891  "trsvcid": ""
00:15:15.891  }
00:15:15.891  ],
00:15:15.892  "allow_any_host": true,
00:15:15.892  "hosts": [],
00:15:15.892  "serial_number": "00000000000000000000",
00:15:15.892  "model_number": "SPDK bdev Controller",
00:15:15.892  "max_namespaces": 32,
00:15:15.892  "min_cntlid": 1,
00:15:15.892  "max_cntlid": 65519,
00:15:15.892  "namespaces": []
00:15:15.892  }
00:15:15.892  ]
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:15.892  [
00:15:15.892  {
00:15:15.892  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:15.892  "subtype": "NVMe",
00:15:15.892  "listen_addresses": [
00:15:15.892  {
00:15:15.892  "trtype": "VFIOUSER",
00:15:15.892  "adrfam": "IPv4",
00:15:15.892  "traddr": "/var/tmp/vfiouser-1",
00:15:15.892  "trsvcid": ""
00:15:15.892  }
00:15:15.892  ],
00:15:15.892  "allow_any_host": true,
00:15:15.892  "hosts": [],
00:15:15.892  "serial_number": "00000000000000000000",
00:15:15.892  "model_number": "SPDK bdev Controller",
00:15:15.892  "max_namespaces": 32,
00:15:15.892  "min_cntlid": 1,
00:15:15.892  "max_cntlid": 65519,
00:15:15.892  "namespaces": []
00:15:15.892  }
00:15:15.892  ]
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1
00:15:15.892   11:05:32 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:16.150  [2024-12-09 11:05:32.996258] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:17.096     11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:17.096    11:05:33 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:17.096  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:17.096   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn
00:15:17.096   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]]
00:15:17.096    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems
00:15:17.096    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.096    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length'
00:15:17.096    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.096    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.355   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]]
00:15:17.355    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0
00:15:17.355    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle
00:15:17.355    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:17.355    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:17.355    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:17.355  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:17.355  I0000 00:00:1733738734.329436  218182 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:17.355  I0000 00:00:1733738734.331120  218182 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:17.614   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:17.614  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:17.614  I0000 00:00:1733738734.563131  218299 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:17.614  I0000 00:00:1733738734.564532  218299 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:17.614   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:17.614     11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:17.614    11:05:34 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:17.873  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:17.873   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]]
00:15:17.873    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems
00:15:17.873    11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length'
00:15:17.873    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.873    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:17.873    11:05:34 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.131   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]]
00:15:18.131   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]]
00:15:18.131   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]]
00:15:18.131   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:18.131   11:05:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:18.131  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:18.131  I0000 00:00:1733738735.078687  218331 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:18.131  I0000 00:00:1733738735.080363  218331 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:18.131  {}
00:15:18.131   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.131   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.132    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.132  [2024-12-09 11:05:35.128099] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:18.132  request:
00:15:18.132  {
00:15:18.132  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:18.132  "method": "nvmf_get_subsystems",
00:15:18.132  "req_id": 1
00:15:18.132  }
00:15:18.132  Got JSON-RPC error response
00:15:18.132  response:
00:15:18.132  {
00:15:18.132  "code": -19,
00:15:18.132  "message": "No such device"
00:15:18.132  }
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
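The `NOT` wrapper traced above (autotest_common.sh@652-679) inverts a command's exit status so that an expected failure, here fetching the just-deleted `vfiouser-0` subsystem, counts as a pass, while the `es > 128` check still lets a signal death fail the test. A reduced sketch of that inversion (the real helper also validates the argument with `type -t` and manages xtrace state, which is omitted here):

```shell
#!/usr/bin/env bash
# Succeed only if the wrapped command fails with a "normal" error
# (exit status 1..128); death by signal (>128) is still propagated as failure.
NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && return "$es"   # killed by a signal: not an expected failure
    ((es != 0))                    # success for us == failure for the command
}

NOT false && echo 'expected failure detected'
NOT true || echo 'unexpected success detected'
```

In the log, `rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0` exits nonzero with the "No such device" JSON-RPC error, so `es=1` and the surrounding `(( !es == 0 ))` assertion passes.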
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.132   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.389  [
00:15:18.389  {
00:15:18.389  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:18.389  "subtype": "NVMe",
00:15:18.389  "listen_addresses": [
00:15:18.389  {
00:15:18.389  "trtype": "VFIOUSER",
00:15:18.389  "adrfam": "IPv4",
00:15:18.389  "traddr": "/var/tmp/vfiouser-1",
00:15:18.389  "trsvcid": ""
00:15:18.389  }
00:15:18.389  ],
00:15:18.389  "allow_any_host": true,
00:15:18.389  "hosts": [],
00:15:18.389  "serial_number": "00000000000000000000",
00:15:18.389  "model_number": "SPDK bdev Controller",
00:15:18.389  "max_namespaces": 32,
00:15:18.389  "min_cntlid": 1,
00:15:18.389  "max_cntlid": 65519,
00:15:18.389  "namespaces": []
00:15:18.389  }
00:15:18.389  ]
00:15:18.389   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.389    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. | length'
00:15:18.389    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems
00:15:18.389    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.389    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.389    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.389   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]]
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:18.390     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:18.390    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:18.390  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:18.647   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]]
00:15:18.648   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:18.648   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:18.648  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:18.648  I0000 00:00:1733738735.615519  218555 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:18.648  I0000 00:00:1733738735.617239  218555 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:18.648  {}
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.907  [2024-12-09 11:05:35.673635] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:18.907  request:
00:15:18.907  {
00:15:18.907  "nqn": "nqn.2016-06.io.spdk:vfiouser-0",
00:15:18.907  "method": "nvmf_get_subsystems",
00:15:18.907  "req_id": 1
00:15:18.907  }
00:15:18.907  Got JSON-RPC error response
00:15:18.907  response:
00:15:18.907  {
00:15:18.907  "code": -19,
00:15:18.907  "message": "No such device"
00:15:18.907  }
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.907  [2024-12-09 11:05:35.689687] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:18.907  request:
00:15:18.907  {
00:15:18.907  "nqn": "nqn.2016-06.io.spdk:vfiouser-1",
00:15:18.907  "method": "nvmf_get_subsystems",
00:15:18.907  "req_id": 1
00:15:18.907  }
00:15:18.907  Got JSON-RPC error response
00:15:18.907  response:
00:15:18.907  {
00:15:18.907  "code": -19,
00:15:18.907  "message": "No such device"
00:15:18.907  }
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length'
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.907   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]]
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:18.907     11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:18.907    11:05:35 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true'
00:15:18.907  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:19.167   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]]
00:15:19.167   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:19.167   11:05:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:19.426  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:19.426  I0000 00:00:1733738736.182101  218607 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:19.426  I0000 00:00:1733738736.183967  218607 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:19.426  [2024-12-09 11:05:36.187207] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:19.426  {}
00:15:19.426   11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:19.426   11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:19.426  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:19.426  I0000 00:00:1733738736.417501  218629 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:19.426  I0000 00:00:1733738736.419152  218629 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:19.426  [2024-12-09 11:05:36.423860] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:19.426  {}
00:15:19.685    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0
00:15:19.685    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:19.685    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:19.685    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle
00:15:19.685    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:19.685  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:19.685  I0000 00:00:1733738736.668169  218665 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:19.685  I0000 00:00:1733738736.669734  218665 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:19.685  [2024-12-09 11:05:36.672499] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:19.944   11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:19.944    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0
00:15:19.944    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1
00:15:19.944    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle
00:15:19.944    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:19.944    11:05:36 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:20.203  [2024-12-09 11:05:36.972444] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:20.203  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:20.203  I0000 00:00:1733738737.032256  218874 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:20.203  I0000 00:00:1733738737.033946  218874 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:20.203  [2024-12-09 11:05:37.037651] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist
00:15:20.203   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid'
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.203   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid'
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.203    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:20.462    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.462   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:20.462   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:20.462   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:20.462    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:20.462    11:05:37 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:20.462  [2024-12-09 11:05:37.340567] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller
00:15:20.721  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:20.721  I0000 00:00:1733738737.577442  218911 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:20.721  I0000 00:00:1733738737.579393  218911 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:20.721  {}
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length'
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.721   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]]
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length'
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:20.721    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.979   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]]
00:15:20.979    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:20.979    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid'
00:15:20.979    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.979    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:20.979    11:05:37 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.979   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ dab5a342-21e6-4c60-9f38-c3c450675d49 == \d\a\b\5\a\3\4\2\-\2\1\e\6\-\4\c\6\0\-\9\f\3\8\-\c\3\c\4\5\0\6\7\5\d\4\9 ]]
00:15:20.979   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:20.979   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:20.979   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:20.980   11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:20.980     11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:20.980    11:05:37 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:20.980  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:21.238     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:21.238  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:21.238   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:21.238    11:05:38 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:21.497  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:21.497  I0000 00:00:1733738738.470355  219157 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:21.497  I0000 00:00:1733738738.471877  219157 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:21.755  {}
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length'
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.755   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]]
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.755    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length'
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]]
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid'
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ dab5a342-21e6-4c60-9f38-c3c450675d49 == \d\a\b\5\a\3\4\2\-\2\1\e\6\-\4\c\6\0\-\9\f\3\8\-\c\3\c\4\5\0\6\7\5\d\4\9 ]]
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid'
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 == \7\e\b\8\d\5\8\a\-\8\5\2\a\-\4\8\0\2\-\b\b\8\c\-\9\9\b\f\c\6\b\7\7\e\8\9 ]]
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:21.756   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:21.756     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:21.756    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:21.756  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.014   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:22.014   11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:22.014     11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:22.014    11:05:38 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:22.014  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:22.272   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:22.272   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:22.272   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:22.272   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:22.272    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:22.272    11:05:39 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:22.531  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:22.531  I0000 00:00:1733738739.416364  219409 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:22.531  I0000 00:00:1733738739.418135  219409 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:22.531  {}
00:15:22.531   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:22.531   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:22.531    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:22.531    11:05:39 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:22.789  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:22.789  I0000 00:00:1733738739.760917  219433 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:22.789  I0000 00:00:1733738739.762883  219433 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:23.048  {}
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]]
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]]
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ dab5a342-21e6-4c60-9f38-c3c450675d49 == \d\a\b\5\a\3\4\2\-\2\1\e\6\-\4\c\6\0\-\9\f\3\8\-\c\3\c\4\5\0\6\7\5\d\4\9 ]]
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 == \7\e\b\8\d\5\8\a\-\8\5\2\a\-\4\8\0\2\-\b\b\8\c\-\9\9\b\f\c\6\b\7\7\e\8\9 ]]
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:23.048   11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.048    11:05:39 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.048     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.048    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:23.048  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.307   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:23.307   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.307     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.307    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:23.307  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:23.566   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.566    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.567    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.567    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.567     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.567    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:23.567  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:23.826   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:23.826   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:23.826     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:23.826    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:23.826  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:24.085   11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.085     11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.085    11:05:40 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.085  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.085   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:24.086   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.086     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.086    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:24.344  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:24.344   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.344     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.344    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:24.344  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.603   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:24.603   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:24.603     11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:24.603    11:05:41 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:24.603  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:24.862   11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:24.862    11:05:41 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:24.862    11:05:41 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:25.121  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:25.121  I0000 00:00:1733738741.969406  219927 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:25.121  I0000 00:00:1733738741.971169  219927 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:25.121  {}
00:15:25.121   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:25.121   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:25.121    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:25.121    11:05:42 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:25.379  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:25.379  I0000 00:00:1733738742.323315  219958 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:25.379  I0000 00:00:1733738742.325043  219958 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:25.379  {}
00:15:25.379    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:25.379    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.379    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]]
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]]
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ dab5a342-21e6-4c60-9f38-c3c450675d49 == \d\a\b\5\a\3\4\2\-\2\1\e\6\-\4\c\6\0\-\9\f\3\8\-\c\3\c\4\5\0\6\7\5\d\4\9 ]]
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 == \7\e\b\8\d\5\8\a\-\8\5\2\a\-\4\8\0\2\-\b\b\8\c\-\9\9\b\f\c\6\b\7\7\e\8\9 ]]
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:25.637   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:25.637     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:25.637    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:25.637  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:25.896   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:25.896   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:25.896     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:25.896    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:25.896  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]]
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:26.155   11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:26.155     11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:26.155    11:05:42 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:26.155  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:26.414   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:26.414   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:26.414    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:26.414     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:26.415    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:26.415  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:26.673  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:26.673   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:26.673     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:26.673    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:26.931  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]]
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:26.931    11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:26.931   11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:26.931    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:26.931    11:05:43 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:26.932     11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:26.932    11:05:43 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:26.932  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:27.189   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:27.189   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:27.190     11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:27.190    11:05:44 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:27.190  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:27.447   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:27.448    11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:27.448    11:05:44 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:27.706  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:27.706  I0000 00:00:1733738744.589342  220457 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:27.706  I0000 00:00:1733738744.591232  220457 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:27.706  {}
00:15:27.706   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:27.706   11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:27.706    11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:27.706    11:05:44 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:27.964  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:27.964  I0000 00:00:1733738744.920646  220481 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:27.964  I0000 00:00:1733738744.922183  220481 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:27.964  {}
00:15:28.265    11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:28.265    11:05:44 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length'
00:15:28.265    11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.265    11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:28.265    11:05:44 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]]
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length'
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]]
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0
00:15:28.265   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:28.265     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:28.265    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn'
00:15:28.265  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]]
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:28.525     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:28.525    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l dab5a342-21e6-4c60-9f38-c3c450675d49 /sys/class/nvme/nvme0/nvme*/uuid'
00:15:28.525  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:28.525  grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:28.525   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_volume
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_volume
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1
00:15:28.526   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}'
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:28.526     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:28.526    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn'
00:15:28.526  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:28.784   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1
00:15:28.784   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]]
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:28.784     11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:28.784    11:05:45 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 7eb8d58a-852a-4802-bb8c-99bfc6b77e89 /sys/class/nvme/nvme1/nvme*/uuid'
00:15:28.784  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:29.042  grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory
00:15:29.042   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=
00:15:29.042   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]]
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:29.043   11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:29.043    11:05:45 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:29.043    11:05:45 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:29.301  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:29.301  I0000 00:00:1733738746.199360  220758 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:29.301  I0000 00:00:1733738746.201052  220758 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:29.301  {}
00:15:29.301   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:29.301   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:29.301    11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:29.301    11:05:46 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:29.560  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:29.560  I0000 00:00:1733738746.542171  220968 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:29.560  I0000 00:00:1733738746.544005  220968 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:29.819  {}
00:15:29.819   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:29.819   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:29.819    11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 7eb8d58a-852a-4802-bb8c-99bfc6b77e89
00:15:29.819    11:05:46 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:30.078  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.078  I0000 00:00:1733738746.844096  220991 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.079  I0000 00:00:1733738746.845937  220991 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:30.079  {}
00:15:30.079   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:30.079   11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.079    11:05:46 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:30.079    11:05:46 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:30.337  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.337  I0000 00:00:1733738747.138621  221014 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.337  I0000 00:00:1733738747.140521  221014 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:30.337  {}
00:15:30.337   11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:30.337   11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.596  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.596  I0000 00:00:1733738747.396613  221171 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.596  I0000 00:00:1733738747.398314  221171 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:30.596  {}
00:15:30.596   11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1
00:15:30.596   11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.856  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:30.856  I0000 00:00:1733738747.649662  221263 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:30.856  I0000 00:00:1733738747.651309  221263 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:30.856  {}
00:15:30.856    11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0
00:15:30.856    11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42
00:15:30.856    11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:30.856    11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:30.856    11:05:47 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle
00:15:31.114  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:31.114  I0000 00:00:1733738747.895890  221287 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:31.114  I0000 00:00:1733738747.897663  221287 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:31.114  [2024-12-09 11:05:47.900006] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist
00:15:31.114   11:05:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42
00:15:31.114   11:05:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:31.114   11:05:48 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:31.372  [2024-12-09 11:05:48.238892] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:32.307     11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:32.307    11:05:49 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:32.307  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:32.307   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn
00:15:32.307   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]]
00:15:32.307   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42
00:15:32.307   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:32.565  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:32.565  I0000 00:00:1733738749.447368  221531 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:32.565  I0000 00:00:1733738749.449149  221531 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:32.565  {}
00:15:32.565   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:32.565   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # local es=0
00:15:32.565   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@654 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:32.565   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@640 -- # local arg=vm_check_subsys_nqn
00:15:32.565   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:32.565    11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # type -t vm_check_subsys_nqn
00:15:32.566   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:32.566   11:05:49 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42
00:15:32.566   11:05:49 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@338 -- # local vm_num=0
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@339 -- # shift
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:15:33.501     11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:15:33.501    11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn'
00:15:33.759  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:15:33.759  grep: /sys/class/nvme/*/subsysnqn: No such file or directory
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]]
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo ===========
00:15:33.759  ===========
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42'
00:15:33.759  ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo ===========
00:15:33.759  ===========
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@655 -- # es=1
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:33.759   11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef
00:15:33.759    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0
00:15:33.759    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:33.759    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:33.759    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:33.759    11:05:50 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle
00:15:34.018  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:34.018  I0000 00:00:1733738750.943352  221836 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:34.018  I0000 00:00:1733738750.945293  221836 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:34.018  [2024-12-09 11:05:50.948922] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:34.276   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid'
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.276   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:34.276   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef
00:15:34.276    11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:15:34.276     11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:15:34.276  [2024-12-09 11:05:51.250640] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:34.535  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:34.535  I0000 00:00:1733738751.418168  221985 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:34.535  I0000 00:00:1733738751.420074  221985 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:34.535  {}
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name'
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.535   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=8da43e5d-9cdd-4b85-bde8-fec3547fdccd
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name'
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b 8da43e5d-9cdd-4b85-bde8-fec3547fdccd
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.535    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.793   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]]
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b 8da43e5d-9cdd-4b85-bde8-fec3547fdccd
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")'
00:15:34.793    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.793   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{
00:15:34.793    "name": "8da43e5d-9cdd-4b85-bde8-fec3547fdccd",
00:15:34.793    "aliases": [
00:15:34.793      "a9c7798a-2098-54b9-b723-f087b2985479"
00:15:34.793    ],
00:15:34.793    "product_name": "crypto",
00:15:34.793    "block_size": 4096,
00:15:34.793    "num_blocks": 25600,
00:15:34.793    "uuid": "a9c7798a-2098-54b9-b723-f087b2985479",
00:15:34.793    "assigned_rate_limits": {
00:15:34.793      "rw_ios_per_sec": 0,
00:15:34.793      "rw_mbytes_per_sec": 0,
00:15:34.793      "r_mbytes_per_sec": 0,
00:15:34.793      "w_mbytes_per_sec": 0
00:15:34.793    },
00:15:34.793    "claimed": true,
00:15:34.793    "claim_type": "exclusive_write",
00:15:34.793    "zoned": false,
00:15:34.793    "supported_io_types": {
00:15:34.793      "read": true,
00:15:34.793      "write": true,
00:15:34.793      "unmap": false,
00:15:34.793      "flush": false,
00:15:34.793      "reset": true,
00:15:34.793      "nvme_admin": false,
00:15:34.793      "nvme_io": false,
00:15:34.793      "nvme_io_md": false,
00:15:34.793      "write_zeroes": true,
00:15:34.793      "zcopy": false,
00:15:34.793      "get_zone_info": false,
00:15:34.793      "zone_management": false,
00:15:34.793      "zone_append": false,
00:15:34.793      "compare": false,
00:15:34.793      "compare_and_write": false,
00:15:34.793      "abort": false,
00:15:34.793      "seek_hole": false,
00:15:34.793      "seek_data": false,
00:15:34.793      "copy": false,
00:15:34.793      "nvme_iov_md": false
00:15:34.793    },
00:15:34.793    "memory_domains": [
00:15:34.793      {
00:15:34.793        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:34.793        "dma_device_type": 2
00:15:34.793      }
00:15:34.793    ],
00:15:34.793    "driver_specific": {
00:15:34.793      "crypto": {
00:15:34.793        "base_bdev_name": "null0",
00:15:34.793        "name": "8da43e5d-9cdd-4b85-bde8-fec3547fdccd",
00:15:34.793        "key_name": "8da43e5d-9cdd-4b85-bde8-fec3547fdccd_AES_CBC"
00:15:34.793      }
00:15:34.793    }
00:15:34.793  }'
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]]
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=8da43e5d-9cdd-4b85-bde8-fec3547fdccd_AES_CBC
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k 8da43e5d-9cdd-4b85-bde8-fec3547fdccd_AES_CBC
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[
00:15:34.794  {
00:15:34.794  "name": "8da43e5d-9cdd-4b85-bde8-fec3547fdccd_AES_CBC",
00:15:34.794  "cipher": "AES_CBC",
00:15:34.794  "key": "1234567890abcdef1234567890abcdef"
00:15:34.794  }
00:15:34.794  ]'
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key'
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher'
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:34.794   11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:34.794    11:05:51 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:35.363  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:35.363  I0000 00:00:1733738752.071186  222043 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:35.363  I0000 00:00:1733738752.072948  222043 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:35.363  {}
00:15:35.363   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:35.363   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:35.363  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:35.363  I0000 00:00:1733738752.353751  222258 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:35.363  I0000 00:00:1733738752.355597  222258 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:35.622  {}
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")'
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:35.622   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]]
00:15:35.622   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:35.622    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle
00:15:35.880  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:35.880  I0000 00:00:1733738752.657673  222298 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:35.880  I0000 00:00:1733738752.659496  222298 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:35.880  [2024-12-09 11:05:52.666130] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist
00:15:35.880   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:35.881   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:35.881   11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:35.881    11:05:52 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:35.881    11:05:52 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:36.138  [2024-12-09 11:05:52.966814] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller
00:15:36.138  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:36.138  I0000 00:00:1733738753.111989  222320 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:36.138  I0000 00:00:1733738753.113766  222320 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:36.396  {}
00:15:36.396    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1
00:15:36.396   11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61
00:15:36.396    11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@45 -- # local rootdir
00:15:36.396    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:15:36.396    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys
00:15:36.397     11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:36.397    11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:15:36.397    11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:15:36.397  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:36.397  I0000 00:00:1733738753.392206  222477 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:36.397  I0000 00:00:1733738753.394040  222477 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:36.655   11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:36.655    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:36.655    11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:36.655  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:36.655  I0000 00:00:1733738753.660335  222571 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:36.655  I0000 00:00:1733738753.662106  222571 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:36.913  {}
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0
00:15:36.913   11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits'
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:36.913    11:05:53 sma.sma_vfiouser_qemu -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.914   11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:36.914   11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:36.914    11:05:53 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 dab5a342-21e6-4c60-9f38-c3c450675d49
00:15:36.914    11:05:53 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python
00:15:37.171  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:37.171  I0000 00:00:1733738754.018735  222603 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:37.171  I0000 00:00:1733738754.020455  222603 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:37.171  {}
00:15:37.171   11:05:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0
00:15:37.171   11:05:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:37.430  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:37.430  I0000 00:00:1733738754.291614  222630 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:37.430  I0000 00:00:1733738754.293645  222630 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:37.430  {}
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@476 -- # local vm
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # vm_list_all
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # vms=()
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@466 -- # local vms
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@478 -- # vm_kill 0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@309 -- # return 0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # local vm_pid
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@450 -- # vm_pid=212762
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=212762)'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=212762)'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=212762)'
00:15:37.430  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=212762)
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@454 -- # /bin/kill 212762
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@455 -- # notice 'process 212762 killed'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 212762 killed'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out=
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 212762 killed'
00:15:37.430  INFO: process 212762 killed
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 216927
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 216927 ']'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 216927
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:37.430    11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216927
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216927'
00:15:37.430  killing process with pid 216927
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 216927
00:15:37.430   11:05:54 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 216927
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@100 -- # killprocess 217340
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # '[' -z 217340 ']'
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@958 -- # kill -0 217340
00:15:39.332    11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # uname
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:39.332    11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217340
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # process_name=python3
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217340'
00:15:39.332  killing process with pid 217340
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@973 -- # kill 217340
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@978 -- # wait 217340
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']'
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT
00:15:39.332  
00:15:39.332  real	0m53.229s
00:15:39.332  user	0m39.055s
00:15:39.332  sys	0m3.503s
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:39.332   11:05:56 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x
00:15:39.332  ************************************
00:15:39.332  END TEST sma_vfiouser_qemu
00:15:39.332  ************************************
00:15:39.332   11:05:56 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:15:39.332   11:05:56 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:39.332   11:05:56 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:39.332   11:05:56 sma -- common/autotest_common.sh@10 -- # set +x
00:15:39.332  ************************************
00:15:39.332  START TEST sma_plugins
00:15:39.332  ************************************
00:15:39.332   11:05:56 sma.sma_plugins -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh
00:15:39.332  * Looking for test storage...
00:15:39.332  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:39.332     11:05:56 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lcov --version
00:15:39.332     11:05:56 sma.sma_plugins -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-:
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-:
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<'
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@345 -- # : 1
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:39.332     11:05:56 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:39.332    11:05:56 sma.sma_plugins -- scripts/common.sh@368 -- # return 0
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:39.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:39.332  		--rc genhtml_branch_coverage=1
00:15:39.332  		--rc genhtml_function_coverage=1
00:15:39.332  		--rc genhtml_legend=1
00:15:39.332  		--rc geninfo_all_blocks=1
00:15:39.332  		--rc geninfo_unexecuted_blocks=1
00:15:39.332  		
00:15:39.332  		'
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:39.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:39.332  		--rc genhtml_branch_coverage=1
00:15:39.332  		--rc genhtml_function_coverage=1
00:15:39.332  		--rc genhtml_legend=1
00:15:39.332  		--rc geninfo_all_blocks=1
00:15:39.332  		--rc geninfo_unexecuted_blocks=1
00:15:39.332  		
00:15:39.332  		'
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:39.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:39.332  		--rc genhtml_branch_coverage=1
00:15:39.332  		--rc genhtml_function_coverage=1
00:15:39.332  		--rc genhtml_legend=1
00:15:39.332  		--rc geninfo_all_blocks=1
00:15:39.332  		--rc geninfo_unexecuted_blocks=1
00:15:39.332  		
00:15:39.332  		'
00:15:39.332    11:05:56 sma.sma_plugins -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:39.332  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:39.332  		--rc genhtml_branch_coverage=1
00:15:39.332  		--rc genhtml_function_coverage=1
00:15:39.332  		--rc genhtml_legend=1
00:15:39.332  		--rc geninfo_all_blocks=1
00:15:39.332  		--rc geninfo_unexecuted_blocks=1
00:15:39.332  		
00:15:39.332  		'
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=223137
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=223138
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten
00:15:39.332   11:05:56 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:39.332    11:05:56 sma.sma_plugins -- sma/plugins.sh@34 -- # cat
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:39.332   11:05:56 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:39.590  [2024-12-09 11:05:56.382722] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:15:39.590  [2024-12-09 11:05:56.382871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223137 ]
00:15:39.590  EAL: No free 2048 kB hugepages reported on node 1
00:15:39.590  [2024-12-09 11:05:56.489702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.590  [2024-12-09 11:05:56.585951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:40.525   11:05:57 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:40.525   11:05:57 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:40.525   11:05:57 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:40.525   11:05:57 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:40.525  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:40.525  I0000 00:00:1733738757.495606  223138 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.460   11:05:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:41.460   11:05:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:41.460   11:05:58 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:41.460   11:05:58 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:41.460    11:05:58 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme
00:15:41.460    11:05:58 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle
00:15:41.460    11:05:58 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:41.718  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:41.718  I0000 00:00:1733738758.548271  223583 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.718  I0000 00:00:1733738758.550049  223583 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:41.718   11:05:58 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:41.718    11:05:58 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp
00:15:41.718    11:05:58 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle
00:15:41.718    11:05:58 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:41.976  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:41.976  I0000 00:00:1733738758.763575  223608 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:41.976  I0000 00:00:1733738758.765085  223608 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 223138
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 223138 ']'
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 223138
00:15:41.976    11:05:58 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:41.976    11:05:58 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223138
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223138'
00:15:41.976  killing process with pid 223138
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 223138
00:15:41.976   11:05:58 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 223138
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=223637
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:41.976    11:05:58 sma.sma_plugins -- sma/plugins.sh@53 -- # cat
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:41.976   11:05:58 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:41.976   11:05:58 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:42.234  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:42.234  I0000 00:00:1733738759.115014  223637 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.168   11:05:59 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:43.168   11:05:59 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:43.168   11:05:59 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:43.168   11:05:59 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:43.168    11:05:59 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle
00:15:43.168    11:05:59 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp
00:15:43.168    11:05:59 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.168  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.168  I0000 00:00:1733738760.142815  223873 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.168  I0000 00:00:1733738760.144524  223873 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:43.168   11:06:00 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:43.168   11:06:00 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@652 -- # local es=0
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@654 -- # valid_exec_arg create_device nvme
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@640 -- # local arg=create_device
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:43.168    11:06:00 sma.sma_plugins -- common/autotest_common.sh@644 -- # type -t create_device
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:43.168   11:06:00 sma.sma_plugins -- common/autotest_common.sh@655 -- # create_device nvme
00:15:43.168   11:06:00 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:43.427  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.427  I0000 00:00:1733738760.394220  223924 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:43.427  I0000 00:00:1733738760.395784  223924 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:43.427  Traceback (most recent call last):
00:15:43.427    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:15:43.427      main(sys.argv[1:])
00:15:43.427    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:15:43.427      result = client.call(request['method'], request.get('params', {}))
00:15:43.427               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:43.427    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:15:43.427      response = func(request=json_format.ParseDict(params, input()))
00:15:43.427                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:43.427    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:15:43.427      return _end_unary_response_blocking(state, call, False, None)
00:15:43.427             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:43.427    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:15:43.427      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:15:43.427      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:15:43.427  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:15:43.427  	status = StatusCode.INVALID_ARGUMENT
00:15:43.427  	details = "Unsupported device type"
00:15:43.427  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-09T11:06:00.397976487+01:00", grpc_status:3, grpc_message:"Unsupported device type"}"
00:15:43.427  >
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@655 -- # es=1
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:43.427   11:06:00 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 223637
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 223637 ']'
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 223637
00:15:43.427    11:06:00 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:43.427   11:06:00 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:43.427    11:06:00 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223637
00:15:43.686   11:06:00 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:43.686   11:06:00 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:43.686   11:06:00 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223637'
00:15:43.686  killing process with pid 223637
00:15:43.686   11:06:00 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 223637
00:15:43.686   11:06:00 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 223637
00:15:43.686   11:06:00 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=224162
00:15:43.686   11:06:00 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:43.686    11:06:00 sma.sma_plugins -- sma/plugins.sh@70 -- # cat
00:15:43.686   11:06:00 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:43.686   11:06:00 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:43.686   11:06:00 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:43.945  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:43.945  I0000 00:00:1733738760.731560  224162 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:44.883   11:06:01 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:44.883   11:06:01 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:44.883   11:06:01 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:44.883   11:06:01 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle
00:15:44.883  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:44.883  I0000 00:00:1733738761.766953  224454 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:44.883  I0000 00:00:1733738761.768629  224454 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:44.883   11:06:01 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle
00:15:44.883    11:06:01 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:45.141  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:45.141  I0000 00:00:1733738761.986528  224490 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:45.141  I0000 00:00:1733738761.988163  224490 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:45.141   11:06:02 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:45.141   11:06:02 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 224162
00:15:45.141   11:06:02 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 224162 ']'
00:15:45.141   11:06:02 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 224162
00:15:45.141    11:06:02 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:45.142    11:06:02 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224162
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224162'
00:15:45.142  killing process with pid 224162
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 224162
00:15:45.142   11:06:02 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 224162
00:15:45.142   11:06:02 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=224526
00:15:45.142   11:06:02 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:45.142   11:06:02 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten
00:15:45.142    11:06:02 sma.sma_plugins -- sma/plugins.sh@89 -- # cat
00:15:45.142   11:06:02 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:45.142   11:06:02 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:45.400  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:45.400  I0000 00:00:1733738762.299856  224526 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.335   11:06:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:46.335   11:06:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:46.335   11:06:03 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:46.335   11:06:03 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:46.336    11:06:03 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme
00:15:46.336    11:06:03 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle
00:15:46.336    11:06:03 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.594  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.594  I0000 00:00:1733738763.349563  224762 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.594  I0000 00:00:1733738763.351289  224762 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:46.594   11:06:03 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:46.594    11:06:03 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp
00:15:46.594    11:06:03 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:46.594    11:06:03 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle
00:15:46.594  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:46.594  I0000 00:00:1733738763.566020  224794 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:46.594  I0000 00:00:1733738763.567486  224794 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:46.594   11:06:03 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:46.594   11:06:03 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 224526
00:15:46.594   11:06:03 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 224526 ']'
00:15:46.594   11:06:03 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 224526
00:15:46.594    11:06:03 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:46.594   11:06:03 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:46.594    11:06:03 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224526
00:15:46.852   11:06:03 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:46.852   11:06:03 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:46.852   11:06:03 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224526'
00:15:46.852  killing process with pid 224526
00:15:46.852   11:06:03 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 224526
00:15:46.852   11:06:03 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 224526
00:15:46.852   11:06:03 sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=224823
00:15:46.852   11:06:03 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:46.852    11:06:03 sma.sma_plugins -- sma/plugins.sh@108 -- # cat
00:15:46.852   11:06:03 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:46.852   11:06:03 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:46.852   11:06:03 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:47.111  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:47.111  I0000 00:00:1733738763.908303  224823 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:48.046   11:06:04 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:48.046   11:06:04 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:48.046   11:06:04 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:48.046   11:06:04 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle
00:15:48.046  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.046  I0000 00:00:1733738764.930585  225059 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:48.046  I0000 00:00:1733738764.932261  225059 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:48.046   11:06:04 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:48.046    11:06:04 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle
00:15:48.310  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.310  I0000 00:00:1733738765.170647  225113 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:48.310  I0000 00:00:1733738765.172108  225113 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 224823
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 224823 ']'
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 224823
00:15:48.310    11:06:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:48.310    11:06:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 224823
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 224823'
00:15:48.310  killing process with pid 224823
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 224823
00:15:48.310   11:06:05 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 224823
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=225302
00:15:48.310    11:06:05 sma.sma_plugins -- sma/plugins.sh@127 -- # cat
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:48.310   11:06:05 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:48.310   11:06:05 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:48.569  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:48.569  I0000 00:00:1733738765.492751  225302 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.504   11:06:06 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:49.504   11:06:06 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:49.504   11:06:06 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:49.504   11:06:06 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:49.504    11:06:06 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme
00:15:49.504    11:06:06 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:49.504    11:06:06 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle
00:15:49.763  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:49.763  I0000 00:00:1733738766.525549  225690 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.763  I0000 00:00:1733738766.527270  225690 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:49.763   11:06:06 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:49.763    11:06:06 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp
00:15:49.763    11:06:06 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle
00:15:49.763    11:06:06 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:49.763  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:49.763  I0000 00:00:1733738766.747689  225805 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:49.763  I0000 00:00:1733738766.749345  225805 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 225302
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 225302 ']'
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 225302
00:15:50.021    11:06:06 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:50.021    11:06:06 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225302
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225302'
00:15:50.021  killing process with pid 225302
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 225302
00:15:50.021   11:06:06 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 225302
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=225894
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1
00:15:50.021   11:06:06 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:50.021    11:06:06 sma.sma_plugins -- sma/plugins.sh@143 -- # cat
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:50.021   11:06:06 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:50.279  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:50.279  I0000 00:00:1733738767.116889  225894 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:51.214   11:06:07 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:51.214   11:06:07 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:51.214   11:06:07 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:51.214   11:06:07 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:51.214    11:06:07 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme
00:15:51.214    11:06:07 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle
00:15:51.214    11:06:07 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:51.214  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:51.214  I0000 00:00:1733738768.126950  226215 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:51.214  I0000 00:00:1733738768.128543  226215 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:51.214   11:06:08 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]]
00:15:51.214    11:06:08 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp
00:15:51.214    11:06:08 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:51.214    11:06:08 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle
00:15:51.473  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:51.473  I0000 00:00:1733738768.340979  226244 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:51.473  I0000 00:00:1733738768.342425  226244 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]]
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 225894
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 225894 ']'
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 225894
00:15:51.473    11:06:08 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:51.473    11:06:08 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225894
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225894'
00:15:51.473  killing process with pid 225894
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 225894
00:15:51.473   11:06:08 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 225894
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2)
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=226276
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:51.473    11:06:08 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:51.473   11:06:08 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:51.473   11:06:08 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:51.731  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:51.731  I0000 00:00:1733738768.665376  226276 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:52.666   11:06:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:52.666   11:06:09 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:52.666   11:06:09 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:52.666   11:06:09 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:52.666    11:06:09 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:52.666    11:06:09 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:52.666    11:06:09 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:52.924  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:52.924  I0000 00:00:1733738769.707027  226509 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:52.924  I0000 00:00:1733738769.708602  226509 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:52.924   11:06:09 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]]
00:15:52.924    11:06:09 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:52.924    11:06:09 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:52.924    11:06:09 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:53.183  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:53.183  I0000 00:00:1733738769.942573  226635 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:53.183  I0000 00:00:1733738769.944159  226635 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:53.183   11:06:09 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]]
00:15:53.183   11:06:09 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 226276
00:15:53.183   11:06:09 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 226276 ']'
00:15:53.183   11:06:09 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 226276
00:15:53.183    11:06:09 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:53.183   11:06:09 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:53.183    11:06:09 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226276
00:15:53.183   11:06:10 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:53.183   11:06:10 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:53.183   11:06:10 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226276'
00:15:53.183  killing process with pid 226276
00:15:53.183   11:06:10 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 226276
00:15:53.183   11:06:10 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 226276
00:15:53.183   11:06:10 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}"
00:15:53.183   11:06:10 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=226761
00:15:53.183   11:06:10 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins
00:15:53.183   11:06:10 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten
00:15:53.183   11:06:10 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:53.183    11:06:10 sma.sma_plugins -- sma/plugins.sh@163 -- # cat
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 ))
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:53.183   11:06:10 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s
00:15:53.442  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:53.442  I0000 00:00:1733738770.269703  226761 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:54.378   11:06:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ ))
00:15:54.378   11:06:11 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 ))
00:15:54.378   11:06:11 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:54.378   11:06:11 sma.sma_plugins -- sma/common.sh@12 -- # return 0
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle
00:15:54.378  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:54.378  I0000 00:00:1733738771.302105  226994 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:54.378  I0000 00:00:1733738771.303759  226994 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:54.378   11:06:11 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]]
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle
00:15:54.378    11:06:11 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:54.637  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:54.637  I0000 00:00:1733738771.524069  227019 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:54.637  I0000 00:00:1733738771.525480  227019 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:54.637   11:06:11 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]]
00:15:54.637   11:06:11 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 226761
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 226761 ']'
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 226761
00:15:54.637    11:06:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:54.637    11:06:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226761
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=python3
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226761'
00:15:54.637  killing process with pid 226761
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 226761
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 226761
00:15:54.637   11:06:11 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup
00:15:54.637   11:06:11 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 223137
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 223137 ']'
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 223137
00:15:54.637    11:06:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # uname
00:15:54.637   11:06:11 sma.sma_plugins -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:54.637    11:06:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223137
00:15:54.896   11:06:11 sma.sma_plugins -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:54.896   11:06:11 sma.sma_plugins -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:54.896   11:06:11 sma.sma_plugins -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223137'
00:15:54.896  killing process with pid 223137
00:15:54.896   11:06:11 sma.sma_plugins -- common/autotest_common.sh@973 -- # kill 223137
00:15:54.896   11:06:11 sma.sma_plugins -- common/autotest_common.sh@978 -- # wait 223137
00:15:56.798   11:06:13 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 226761
00:15:56.798   11:06:13 sma.sma_plugins -- common/autotest_common.sh@954 -- # '[' -z 226761 ']'
00:15:56.798   11:06:13 sma.sma_plugins -- common/autotest_common.sh@958 -- # kill -0 226761
00:15:56.798  /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (226761) - No such process
00:15:56.798   11:06:13 sma.sma_plugins -- common/autotest_common.sh@981 -- # echo 'Process with pid 226761 is not found'
00:15:56.798  Process with pid 226761 is not found
00:15:56.798   11:06:13 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT
00:15:56.798  
00:15:56.799  real	0m17.354s
00:15:56.799  user	0m23.504s
00:15:56.799  sys	0m1.846s
00:15:56.799   11:06:13 sma.sma_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:56.799   11:06:13 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x
00:15:56.799  ************************************
00:15:56.799  END TEST sma_plugins
00:15:56.799  ************************************
00:15:56.799   11:06:13 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:56.799   11:06:13 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:56.799   11:06:13 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:56.799   11:06:13 sma -- common/autotest_common.sh@10 -- # set +x
00:15:56.799  ************************************
00:15:56.799  START TEST sma_discovery
00:15:56.799  ************************************
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh
00:15:56.799  * Looking for test storage...
00:15:56.799  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:56.799     11:06:13 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:15:56.799     11:06:13 sma.sma_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@345 -- # : 1
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:56.799     11:06:13 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:56.799    11:06:13 sma.sma_discovery -- scripts/common.sh@368 -- # return 0
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:56.799  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.799  		--rc genhtml_branch_coverage=1
00:15:56.799  		--rc genhtml_function_coverage=1
00:15:56.799  		--rc genhtml_legend=1
00:15:56.799  		--rc geninfo_all_blocks=1
00:15:56.799  		--rc geninfo_unexecuted_blocks=1
00:15:56.799  		
00:15:56.799  		'
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:56.799  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.799  		--rc genhtml_branch_coverage=1
00:15:56.799  		--rc genhtml_function_coverage=1
00:15:56.799  		--rc genhtml_legend=1
00:15:56.799  		--rc geninfo_all_blocks=1
00:15:56.799  		--rc geninfo_unexecuted_blocks=1
00:15:56.799  		
00:15:56.799  		'
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:56.799  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.799  		--rc genhtml_branch_coverage=1
00:15:56.799  		--rc genhtml_function_coverage=1
00:15:56.799  		--rc genhtml_legend=1
00:15:56.799  		--rc geninfo_all_blocks=1
00:15:56.799  		--rc geninfo_unexecuted_blocks=1
00:15:56.799  		
00:15:56.799  		'
00:15:56.799    11:06:13 sma.sma_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:56.799  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:56.799  		--rc genhtml_branch_coverage=1
00:15:56.799  		--rc genhtml_function_coverage=1
00:15:56.799  		--rc genhtml_legend=1
00:15:56.799  		--rc geninfo_all_blocks=1
00:15:56.799  		--rc geninfo_unexecuted_blocks=1
00:15:56.799  		
00:15:56.799  		'
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=227521
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=227522
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=227523
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=227524
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 227523
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 227523 ']'
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:56.799   11:06:13 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:15:56.799    11:06:13 sma.sma_discovery -- sma/discovery.sh@145 -- # cat
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:56.799  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:56.799   11:06:13 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:56.799  [2024-12-09 11:06:13.794975] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:15:56.799  [2024-12-09 11:06:13.795103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227522 ]
00:15:56.799  [2024-12-09 11:06:13.799531] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:15:56.799  [2024-12-09 11:06:13.799631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227523 ]
00:15:57.058  [2024-12-09 11:06:13.808622] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:15:57.058  [2024-12-09 11:06:13.808728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227521 ]
00:15:57.058  EAL: No free 2048 kB hugepages reported on node 1
00:15:57.058  EAL: No free 2048 kB hugepages reported on node 1
00:15:57.058  EAL: No free 2048 kB hugepages reported on node 1
00:15:57.058  [2024-12-09 11:06:13.923182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.058  [2024-12-09 11:06:13.931446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.058  [2024-12-09 11:06:13.940596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:57.058  [2024-12-09 11:06:14.040937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:57.058  [2024-12-09 11:06:14.055712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:57.317  [2024-12-09 11:06:14.070729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:58.257  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:58.257  I0000 00:00:1733738774.980969  227524 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:58.257  [2024-12-09 11:06:14.992066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:58.257   11:06:14 sma.sma_discovery -- sma/discovery.sh@156 -- # waitforlisten 227521 /var/tmp/spdk.sock1
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 227521 ']'
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock1
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...'
00:15:58.257  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:58.257   11:06:14 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:58.257   11:06:15 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 227522 /var/tmp/spdk.sock2
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@835 -- # '[' -z 227522 ']'
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:15:58.257  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:58.257   11:06:15 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:58.516   11:06:15 sma.sma_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:58.516   11:06:15 sma.sma_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:58.516    11:06:15 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen
00:15:58.516   11:06:15 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=76d951e8-2f0b-484e-884e-8400765c7bbc
00:15:58.516    11:06:15 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen
00:15:58.516   11:06:15 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=a2027337-98d1-42c8-9f23-d41100b7ce72
00:15:58.516    11:06:15 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen
00:15:58.516   11:06:15 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:15:58.516   11:06:15 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1
00:15:58.775  [2024-12-09 11:06:15.652406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:58.775  [2024-12-09 11:06:15.692751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:15:58.775  [2024-12-09 11:06:15.700670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:15:58.775  null0
00:15:58.775   11:06:15 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:15:59.034  [2024-12-09 11:06:15.916811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:59.034  [2024-12-09 11:06:15.973152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:15:59.034  [2024-12-09 11:06:15.981100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 ***
00:15:59.034  [2024-12-09 11:06:15.989127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 ***
00:15:59.034  null0
00:15:59.034  null1
00:15:59.034   11:06:16 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 ))
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 ))
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:15:59.034   11:06:16 sma.sma_discovery -- sma/common.sh@12 -- # return 0
00:15:59.034   11:06:16 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:15:59.034    11:06:16 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:59.293  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:59.293  I0000 00:00:1733738776.236391  227980 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:59.293  I0000 00:00:1733738776.238051  227980 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:15:59.293  [2024-12-09 11:06:16.259838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:15:59.293   11:06:16 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:59.293   11:06:16 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:15:59.553  [
00:15:59.553    {
00:15:59.553      "nqn": "nqn.2016-06.io.spdk:local0",
00:15:59.553      "subtype": "NVMe",
00:15:59.553      "listen_addresses": [
00:15:59.553        {
00:15:59.553          "trtype": "TCP",
00:15:59.553          "adrfam": "IPv4",
00:15:59.553          "traddr": "127.0.0.1",
00:15:59.553          "trsvcid": "4419"
00:15:59.553        }
00:15:59.553      ],
00:15:59.553      "allow_any_host": false,
00:15:59.553      "hosts": [],
00:15:59.553      "serial_number": "00000000000000000000",
00:15:59.553      "model_number": "SPDK bdev Controller",
00:15:59.553      "max_namespaces": 32,
00:15:59.553      "min_cntlid": 1,
00:15:59.553      "max_cntlid": 65519,
00:15:59.553      "namespaces": []
00:15:59.553    }
00:15:59.553  ]
00:15:59.553   11:06:16 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc 8009 8010
00:15:59.553   11:06:16 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:15:59.553   11:06:16 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:15:59.553   11:06:16 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:15:59.553    11:06:16 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 76d951e8-2f0b-484e-884e-8400765c7bbc 8009 8010
00:15:59.553    11:06:16 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=76d951e8-2f0b-484e-884e-8400765c7bbc
00:15:59.553    11:06:16 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:15:59.553    11:06:16 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:15:59.553     11:06:16 sma.sma_discovery -- sma/common.sh@20 -- # python
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:15:59.553     11:06:16 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:15:59.813  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:15:59.813  I0000 00:00:1733738776.781057  228010 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:15:59.813  I0000 00:00:1733738776.782656  228010 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:02.347  {}
00:16:02.347    11:06:19 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:02.348    11:06:19 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length'
00:16:02.348   11:06:19 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]]
00:16:02.348   11:06:19 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:02.348   11:06:19 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid'
00:16:02.348   11:06:19 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009
00:16:02.605  8009
00:16:02.605   11:06:19 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:02.605   11:06:19 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid'
00:16:02.605   11:06:19 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010
00:16:02.864  8010
00:16:02.864    11:06:19 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:02.864    11:06:19 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length'
00:16:03.124   11:06:19 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]]
00:16:03.124    11:06:19 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:03.124    11:06:19 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid'
00:16:03.124   11:06:20 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 76d951e8-2f0b-484e-884e-8400765c7bbc == \7\6\d\9\5\1\e\8\-\2\f\0\b\-\4\8\4\e\-\8\8\4\e\-\8\4\0\0\7\6\5\c\7\b\b\c ]]
00:16:03.124   11:06:20 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72 8010
00:16:03.124   11:06:20 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:03.124   11:06:20 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:03.124   11:06:20 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:03.383    11:06:20 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a2027337-98d1-42c8-9f23-d41100b7ce72 8010
00:16:03.383    11:06:20 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:03.383    11:06:20 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:03.383    11:06:20 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:03.383     11:06:20 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:03.383     11:06:20 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:03.642  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:03.642  I0000 00:00:1733738780.394897  228671 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:03.642  I0000 00:00:1733738780.396722  228671 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:03.642  {}
00:16:03.642    11:06:20 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length'
00:16:03.642    11:06:20 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:03.905   11:06:20 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]]
00:16:03.905    11:06:20 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:03.905    11:06:20 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length'
00:16:03.905   11:06:20 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]]
00:16:03.905   11:06:20 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:03.905   11:06:20 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid'
00:16:03.905   11:06:20 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:04.165  76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:04.165   11:06:21 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:04.165   11:06:21 sma.sma_discovery -- sma/discovery.sh@220 -- # grep a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:04.165   11:06:21 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid'
00:16:04.424  a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:04.424   11:06:21 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:04.424   11:06:21 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:04.424    11:06:21 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:04.424    11:06:21 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:04.683  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:04.683  I0000 00:00:1733738781.553276  228916 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:04.683  I0000 00:00:1733738781.555021  228916 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:04.683  {}
00:16:04.683    11:06:21 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:04.683    11:06:21 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length'
00:16:04.942   11:06:21 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]]
00:16:04.942   11:06:21 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:04.942   11:06:21 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid'
00:16:04.942   11:06:21 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010
00:16:05.201  8010
00:16:05.201    11:06:22 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:05.201    11:06:22 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length'
00:16:05.460   11:06:22 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]]
00:16:05.460    11:06:22 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:05.460    11:06:22 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid'
00:16:05.460   11:06:22 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ a2027337-98d1-42c8-9f23-d41100b7ce72 == \a\2\0\2\7\3\3\7\-\9\8\d\1\-\4\2\c\8\-\9\f\2\3\-\d\4\1\1\0\0\b\7\c\e\7\2 ]]
00:16:05.460   11:06:22 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:05.460   11:06:22 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:05.460    11:06:22 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:05.460    11:06:22 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:05.719  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:05.719  I0000 00:00:1733738782.719685  229160 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:05.719  I0000 00:00:1733738782.721291  229160 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:05.978  {}
00:16:05.978    11:06:22 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:05.978    11:06:22 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. | length'
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]]
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length'
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]]
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9f2daeed-169f-404e-9d0b-1176cc1657de 8009
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9f2daeed-169f-404e-9d0b-1176cc1657de 8009
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.237    11:06:23 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:06.237   11:06:23 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 9f2daeed-169f-404e-9d0b-1176cc1657de 8009
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:06.237   11:06:23 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 9f2daeed-169f-404e-9d0b-1176cc1657de 8009
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:06.237    11:06:23 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:06.237     11:06:23 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:06.237     11:06:23 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:06.496     11:06:23 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:06.756  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:06.756  I0000 00:00:1733738783.533439  229392 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:06.756  I0000 00:00:1733738783.535202  229392 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:07.692  [2024-12-09 11:06:24.626276] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:07.951  [2024-12-09 11:06:24.726511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:07.951  [2024-12-09 11:06:24.826740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:07.951  [2024-12-09 11:06:24.926973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.210  [2024-12-09 11:06:25.027205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.210  [2024-12-09 11:06:25.127438] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.468  [2024-12-09 11:06:25.227669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.468  [2024-12-09 11:06:25.327902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.468  [2024-12-09 11:06:25.428135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.727  [2024-12-09 11:06:25.528374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.727  [2024-12-09 11:06:25.628605] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 9f2daeed-169f-404e-9d0b-1176cc1657de
00:16:08.727  [2024-12-09 11:06:25.628629] bdev.c:8801:_bdev_open_async: *ERROR*: Timed out while waiting for bdev '9f2daeed-169f-404e-9d0b-1176cc1657de' to appear
00:16:08.727  Traceback (most recent call last):
00:16:08.727    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:08.727      main(sys.argv[1:])
00:16:08.727    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:08.727      result = client.call(request['method'], request.get('params', {}))
00:16:08.727               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.727    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:08.727      response = func(request=json_format.ParseDict(params, input()))
00:16:08.727                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.727    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:08.727      return _end_unary_response_blocking(state, call, False, None)
00:16:08.727             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.727    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:08.727      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:08.727      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:08.727  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:08.727  	status = StatusCode.NOT_FOUND
00:16:08.727  	details = "Volume could not be found"
00:16:08.727  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-09T11:06:25.645735397+01:00", grpc_status:5, grpc_message:"Volume could not be found"}"
00:16:08.727  >
00:16:08.727   11:06:25 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:08.727   11:06:25 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:08.727   11:06:25 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:08.727   11:06:25 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:08.727    11:06:25 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:08.727    11:06:25 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length'
00:16:08.987   11:06:25 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]]
00:16:08.987    11:06:25 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:08.987    11:06:25 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length'
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]]
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid)
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc 8009 8010
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:09.246   11:06:26 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:09.246    11:06:26 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 76d951e8-2f0b-484e-884e-8400765c7bbc 8009 8010
00:16:09.246    11:06:26 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:09.246    11:06:26 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:09.246    11:06:26 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:09.246     11:06:26 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:09.246     11:06:26 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:09.505  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:09.505  I0000 00:00:1733738786.370415  229848 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:09.505  I0000 00:00:1733738786.372131  229848 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:12.040  {}
00:16:12.040   11:06:28 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}"
00:16:12.040   11:06:28 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72 8009 8010
00:16:12.040   11:06:28 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:12.040   11:06:28 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:12.040   11:06:28 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a2027337-98d1-42c8-9f23-d41100b7ce72 8009 8010
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:12.040     11:06:28 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010')
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:12.040     11:06:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:12.040  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:12.040  I0000 00:00:1733738788.918831  230294 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:12.040  I0000 00:00:1733738788.920396  230294 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:12.040  {}
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length'
00:16:12.040    11:06:28 sma.sma_discovery -- sma/discovery.sh@251 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:12.300   11:06:29 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]]
00:16:12.300   11:06:29 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:12.300   11:06:29 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009
00:16:12.300   11:06:29 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid'
00:16:12.558  8009
00:16:12.558   11:06:29 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:12.558   11:06:29 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid'
00:16:12.558   11:06:29 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010
00:16:12.816  8010
00:16:12.816   11:06:29 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:12.816   11:06:29 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid'
00:16:12.816   11:06:29 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:13.075  76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:13.075   11:06:29 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:13.075   11:06:29 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid'
00:16:13.075   11:06:29 sma.sma_discovery -- sma/discovery.sh@255 -- # grep a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:13.075  a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:13.075   11:06:30 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:13.075   11:06:30 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:13.075    11:06:30 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:13.075    11:06:30 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:13.333  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:13.333  I0000 00:00:1733738790.315206  230737 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:13.333  I0000 00:00:1733738790.317069  230737 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:13.333  {}
00:16:13.591    11:06:30 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:13.591    11:06:30 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. | length'
00:16:13.591   11:06:30 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]]
00:16:13.591   11:06:30 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:13.591   11:06:30 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid'
00:16:13.591   11:06:30 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009
00:16:13.850  8009
00:16:13.850   11:06:30 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:13.850   11:06:30 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid'
00:16:13.850   11:06:30 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010
00:16:14.109  8010
00:16:14.109   11:06:31 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=delete_device
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.109    11:06:31 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t delete_device
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:14.109   11:06:31 sma.sma_discovery -- common/autotest_common.sh@655 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:14.109   11:06:31 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:14.368  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:14.368  I0000 00:00:1733738791.245697  230791 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:14.368  I0000 00:00:1733738791.247512  230791 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:14.368  Traceback (most recent call last):
00:16:14.368    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:14.368      main(sys.argv[1:])
00:16:14.368    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:14.368      result = client.call(request['method'], request.get('params', {}))
00:16:14.368               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:14.368    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:14.368      response = func(request=json_format.ParseDict(params, input()))
00:16:14.368                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:14.368    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:14.368      return _end_unary_response_blocking(state, call, False, None)
00:16:14.368             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:14.368    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:14.368      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:14.368      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:14.368  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:14.368  	status = StatusCode.FAILED_PRECONDITION
00:16:14.368  	details = "Device has attached volumes"
00:16:14.368  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Device has attached volumes", grpc_status:9, created_time:"2024-12-09T11:06:31.249762017+01:00"}"
00:16:14.368  >
00:16:14.368   11:06:31 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:14.368   11:06:31 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:14.368   11:06:31 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:14.368   11:06:31 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:14.368    11:06:31 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:14.368    11:06:31 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length'
00:16:14.626   11:06:31 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]]
00:16:14.626   11:06:31 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:14.626   11:06:31 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid'
00:16:14.626   11:06:31 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009
00:16:14.885  8009
00:16:14.885   11:06:31 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:14.885   11:06:31 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010
00:16:14.885   11:06:31 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid'
00:16:15.142  8010
00:16:15.142   11:06:31 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:15.142   11:06:31 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:15.142    11:06:31 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:15.142    11:06:31 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:15.142  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:15.142  I0000 00:00:1733738792.132228  231014 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:15.142  I0000 00:00:1733738792.133838  231014 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:15.400  {}
00:16:15.400   11:06:32 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:15.400   11:06:32 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:15.683  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:15.683  I0000 00:00:1733738792.427432  231040 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:15.683  I0000 00:00:1733738792.428915  231040 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:15.683  {}
00:16:15.683    11:06:32 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:15.683    11:06:32 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length'
00:16:15.941   11:06:32 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]]
00:16:15.942   11:06:32 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:15.942    11:06:32 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:15.942    11:06:32 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]]
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:15.942  [2024-12-09 11:06:32.875994] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist
00:16:15.942  request:
00:16:15.942  {
00:16:15.942    "nqn": "nqn.2016-06.io.spdk:local0",
00:16:15.942    "method": "nvmf_get_subsystems",
00:16:15.942    "req_id": 1
00:16:15.942  }
00:16:15.942  Got JSON-RPC error response
00:16:15.942  response:
00:16:15.942  {
00:16:15.942    "code": -19,
00:16:15.942    "message": "No such device"
00:16:15.942  }
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:15.942   11:06:32 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc 8009
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 76d951e8-2f0b-484e-884e-8400765c7bbc ]]
00:16:15.942     11:06:32 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 76d951e8-2f0b-484e-884e-8400765c7bbc 8009
00:16:15.942     11:06:32 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:15.942     11:06:32 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:15.942     11:06:32 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:15.942      11:06:32 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009')
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:15.942      11:06:32 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": {
00:16:15.942  "volume_id": "dtlR6C8LSE6IToQAdlx7vA==",
00:16:15.942  "nvmf": {
00:16:15.942  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:15.942  "discovery": {
00:16:15.942  "discovery_endpoints": [
00:16:15.942  {
00:16:15.942  "trtype": "tcp",
00:16:15.942  "traddr": "127.0.0.1",
00:16:15.942  "trsvcid": "8009"
00:16:15.942  }
00:16:15.942  ]
00:16:15.942  }
00:16:15.942  }
00:16:15.942  },'
00:16:15.942    11:06:32 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:16.200  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:16.200  I0000 00:00:1733738793.144114  231278 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:16.200  I0000 00:00:1733738793.145794  231278 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:17.576  [2024-12-09 11:06:34.255468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:16:17.834   11:06:34 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:17.834    11:06:34 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:17.834    11:06:34 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. | length'
00:16:18.091   11:06:34 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]]
00:16:18.091   11:06:34 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:18.091   11:06:34 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid'
00:16:18.091   11:06:34 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009
00:16:18.349  8009
00:16:18.349    11:06:35 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:18.349    11:06:35 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length'
00:16:18.606   11:06:35 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]]
00:16:18.607    11:06:35 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:18.607    11:06:35 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid'
00:16:18.607   11:06:35 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 76d951e8-2f0b-484e-884e-8400765c7bbc == \7\6\d\9\5\1\e\8\-\2\f\0\b\-\4\8\4\e\-\8\8\4\e\-\8\4\0\0\7\6\5\c\7\b\b\c ]]
00:16:18.607   11:06:35 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:18.607   11:06:35 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:18.607    11:06:35 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:18.607    11:06:35 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:19.172  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:19.172  I0000 00:00:1733738795.876291  231725 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:19.172  I0000 00:00:1733738795.877815  231725 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:19.172  {}
00:16:19.172    11:06:35 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:19.172    11:06:35 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. | length'
00:16:19.172   11:06:36 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]]
00:16:19.172    11:06:36 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:19.172    11:06:36 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length'
00:16:19.431   11:06:36 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]]
00:16:19.431   11:06:36 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72 8010 8011
00:16:19.431   11:06:36 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:19.431   11:06:36 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:19.431   11:06:36 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:19.431    11:06:36 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a2027337-98d1-42c8-9f23-d41100b7ce72 8010 8011
00:16:19.431    11:06:36 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:19.431    11:06:36 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:19.431    11:06:36 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:19.431     11:06:36 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011')
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@44 -- # echo ,
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:19.431     11:06:36 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 ))
00:16:19.690  I0000 00:00:1733738796.611602  231959 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:19.690  I0000 00:00:1733738796.613352  231959 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:21.068  {}
00:16:21.068    11:06:37 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:21.068    11:06:37 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. | length'
00:16:21.068   11:06:38 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]]
00:16:21.068    11:06:38 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:21.068    11:06:38 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length'
00:16:21.326   11:06:38 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]]
00:16:21.326    11:06:38 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:21.326    11:06:38 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid'
00:16:21.585   11:06:38 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ a2027337-98d1-42c8-9f23-d41100b7ce72 == \a\2\0\2\7\3\3\7\-\9\8\d\1\-\4\2\c\8\-\9\f\2\3\-\d\4\1\1\0\0\b\7\c\e\7\2 ]]
00:16:21.585   11:06:38 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 97c4cd1e-f930-4261-82cf-f4b6d6403cf0 8011
00:16:21.585   11:06:38 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:21.585   11:06:38 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:21.585   11:06:38 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:21.585    11:06:38 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 97c4cd1e-f930-4261-82cf-f4b6d6403cf0 8011
00:16:21.585    11:06:38 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:21.585    11:06:38 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:21.585    11:06:38 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:21.585     11:06:38 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:21.586     11:06:38 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011')
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:21.586     11:06:38 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:21.845  I0000 00:00:1733738798.764467  232406 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:21.845  I0000 00:00:1733738798.766203  232406 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:21.845  {}
00:16:21.845    11:06:38 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:21.845    11:06:38 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. | length'
00:16:22.103   11:06:39 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]]
00:16:22.103    11:06:39 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length'
00:16:22.103    11:06:39 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:22.361   11:06:39 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]]
00:16:22.361   11:06:39 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:22.362   11:06:39 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid'
00:16:22.362   11:06:39 sma.sma_discovery -- sma/discovery.sh@307 -- # grep a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:22.620  a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:22.620   11:06:39 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:22.620   11:06:39 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid'
00:16:22.620   11:06:39 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:22.878  97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:22.878   11:06:39 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:22.878   11:06:39 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:22.878    11:06:39 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:22.878    11:06:39 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:23.137  I0000 00:00:1733738799.945949  232653 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:23.137  I0000 00:00:1733738799.947465  232653 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:23.137  [2024-12-09 11:06:39.950892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:23.137  {}
00:16:23.137   11:06:39 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:23.137   11:06:39 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:23.137    11:06:39 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:23.137    11:06:39 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:23.395  I0000 00:00:1733738800.206307  232682 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:23.395  I0000 00:00:1733738800.207942  232682 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:23.395  {}
00:16:23.395   11:06:40 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:23.395   11:06:40 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:23.395    11:06:40 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:23.395    11:06:40 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:23.653  I0000 00:00:1733738800.476177  232705 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:23.653  I0000 00:00:1733738800.477611  232705 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:23.653  {}
00:16:23.653   11:06:40 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:23.653   11:06:40 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:23.911  I0000 00:00:1733738800.740930  232731 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:23.911  I0000 00:00:1733738800.742533  232731 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:23.911  {}
00:16:23.911    11:06:40 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:23.911    11:06:40 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. | length'
00:16:24.170   11:06:41 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]]
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume=
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@73 -- # shift
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]]
00:16:24.170    11:06:41 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:24.448  I0000 00:00:1733738801.211383  232957 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:24.448  I0000 00:00:1733738801.212972  232957 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:24.448  [2024-12-09 11:06:41.235196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:16:24.448   11:06:41 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:24.448   11:06:41 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:24.448    11:06:41 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:24.448    11:06:41 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:24.706  I0000 00:00:1733738801.487746  232978 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:24.706  I0000 00:00:1733738801.489553  232978 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:25.641  {}
00:16:25.899    11:06:42 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:25.899    11:06:42 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. | length'
00:16:25.899   11:06:42 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]]
00:16:25.899   11:06:42 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:25.899   11:06:42 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid'
00:16:25.899   11:06:42 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009
00:16:26.157  8009
00:16:26.157    11:06:43 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:26.157    11:06:43 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length'
00:16:26.415   11:06:43 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]]
00:16:26.415    11:06:43 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:26.415    11:06:43 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid'
00:16:26.673   11:06:43 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 76d951e8-2f0b-484e-884e-8400765c7bbc == \7\6\d\9\5\1\e\8\-\2\f\0\b\-\4\8\4\e\-\8\8\4\e\-\8\4\0\0\7\6\5\c\7\b\b\c ]]
00:16:26.673   11:06:43 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.673    11:06:43 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:26.673    11:06:43 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:26.673    11:06:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:26.673    11:06:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:26.673   11:06:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.674   11:06:43 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:26.674   11:06:43 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:26.932  I0000 00:00:1733738803.779962  233431 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:26.932  I0000 00:00:1733738803.781552  233431 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:28.321  Traceback (most recent call last):
00:16:28.321    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:28.321      main(sys.argv[1:])
00:16:28.321    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:28.321      result = client.call(request['method'], request.get('params', {}))
00:16:28.321               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.321    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:28.321      response = func(request=json_format.ParseDict(params, input()))
00:16:28.321                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.321    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:28.321      return _end_unary_response_blocking(state, call, False, None)
00:16:28.321             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.321    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:28.321      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:28.321      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:28.321  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:28.321  	status = StatusCode.INVALID_ARGUMENT
00:16:28.321  	details = "Unexpected subsystem NQN"
00:16:28.321  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Unexpected subsystem NQN", grpc_status:3, created_time:"2024-12-09T11:06:44.896569547+01:00"}"
00:16:28.321  >
00:16:28.321   11:06:44 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:28.321   11:06:44 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:28.321   11:06:44 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:28.321   11:06:44 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:28.321    11:06:44 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:28.321    11:06:44 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '. | length'
00:16:28.321   11:06:45 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]]
00:16:28.322   11:06:45 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:28.322   11:06:45 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid'
00:16:28.322   11:06:45 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009
00:16:28.580  8009
00:16:28.580    11:06:45 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:28.580    11:06:45 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length'
00:16:28.580   11:06:45 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]]
00:16:28.580    11:06:45 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:28.580    11:06:45 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid'
00:16:28.839   11:06:45 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 76d951e8-2f0b-484e-884e-8400765c7bbc == \7\6\d\9\5\1\e\8\-\2\f\0\b\-\4\8\4\e\-\8\8\4\e\-\8\4\0\0\7\6\5\c\7\b\b\c ]]
00:16:28.839   11:06:45 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839    11:06:45 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:28.839    11:06:45 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:28.839    11:06:45 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:28.839    11:06:45 sma.sma_discovery -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:16:28.839   11:06:45 sma.sma_discovery -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:29.097  I0000 00:00:1733738806.024608  233895 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:29.097  I0000 00:00:1733738806.026282  233895 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:34.365  [2024-12-09 11:06:51.051274] bdev_nvme.c:7604:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs
00:16:34.365  Traceback (most recent call last):
00:16:34.365    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:34.365      main(sys.argv[1:])
00:16:34.365    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:34.365      result = client.call(request['method'], request.get('params', {}))
00:16:34.365               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:34.365    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:34.365      response = func(request=json_format.ParseDict(params, input()))
00:16:34.365                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:34.365    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:34.365      return _end_unary_response_blocking(state, call, False, None)
00:16:34.365             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:34.365    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:34.365      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:34.365      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:34.365  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:34.365  	status = StatusCode.INTERNAL
00:16:34.365  	details = "Failed to start discovery"
00:16:34.365  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-12-09T11:06:51.05455306+01:00", grpc_status:13, grpc_message:"Failed to start discovery"}"
00:16:34.365  >
00:16:34.365   11:06:51 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:34.365   11:06:51 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:34.365   11:06:51 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:34.365   11:06:51 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:34.365    11:06:51 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:34.365    11:06:51 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '. | length'
00:16:34.365   11:06:51 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]]
00:16:34.365   11:06:51 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:34.365   11:06:51 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid'
00:16:34.365   11:06:51 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009
00:16:34.624  8009
00:16:34.624    11:06:51 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:34.624    11:06:51 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length'
00:16:34.882   11:06:51 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]]
00:16:34.882    11:06:51 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:34.882    11:06:51 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid'
00:16:35.141   11:06:52 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 76d951e8-2f0b-484e-884e-8400765c7bbc == \7\6\d\9\5\1\e\8\-\2\f\0\b\-\4\8\4\e\-\8\8\4\e\-\8\4\0\0\7\6\5\c\7\b\b\c ]]
00:16:35.141    11:06:52 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen
00:16:35.141   11:06:52 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 783dedab-a197-4d1d-94fa-cccff4e8ee7b 8008
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@652 -- # local es=0
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 783dedab-a197-4d1d-94fa-cccff4e8ee7b 8008
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:35.141    11:06:52 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -t attach_volume
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:35.141   11:06:52 sma.sma_discovery -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 783dedab-a197-4d1d-94fa-cccff4e8ee7b 8008
00:16:35.141   11:06:52 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:35.141   11:06:52 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:35.141   11:06:52 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:35.141    11:06:52 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 783dedab-a197-4d1d-94fa-cccff4e8ee7b 8008
00:16:35.141    11:06:52 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=783dedab-a197-4d1d-94fa-cccff4e8ee7b
00:16:35.141    11:06:52 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:35.141    11:06:52 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 783dedab-a197-4d1d-94fa-cccff4e8ee7b
00:16:35.142     11:06:52 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008')
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:35.142     11:06:52 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:35.400  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:35.400  I0000 00:00:1733738812.351796  234970 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:35.400  I0000 00:00:1733738812.353410  234970 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:36.777  [2024-12-09 11:06:53.368455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:36.777  [2024-12-09 11:06:53.368521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e080 with addr=127.0.0.1, port=8008
00:16:36.777  [2024-12-09 11:06:53.368577] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:36.777  [2024-12-09 11:06:53.368592] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:36.777  [2024-12-09 11:06:53.368605] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:37.711  [2024-12-09 11:06:54.370708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:37.711  [2024-12-09 11:06:54.370756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e300 with addr=127.0.0.1, port=8008
00:16:37.711  [2024-12-09 11:06:54.370827] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:37.711  [2024-12-09 11:06:54.370842] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:37.711  [2024-12-09 11:06:54.370857] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:38.646  [2024-12-09 11:06:55.373027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:38.646  [2024-12-09 11:06:55.373075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e580 with addr=127.0.0.1, port=8008
00:16:38.646  [2024-12-09 11:06:55.373138] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:38.646  [2024-12-09 11:06:55.373152] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:38.646  [2024-12-09 11:06:55.373162] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:39.580  [2024-12-09 11:06:56.375368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:39.580  [2024-12-09 11:06:56.375400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500024e800 with addr=127.0.0.1, port=8008
00:16:39.580  [2024-12-09 11:06:56.375462] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:16:39.580  [2024-12-09 11:06:56.375474] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:16:39.580  [2024-12-09 11:06:56.375485] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect
00:16:40.516  [2024-12-09 11:06:57.377545] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr
00:16:40.516  Traceback (most recent call last):
00:16:40.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:16:40.516      main(sys.argv[1:])
00:16:40.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:16:40.516      result = client.call(request['method'], request.get('params', {}))
00:16:40.516               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:40.516    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:16:40.516      response = func(request=json_format.ParseDict(params, input()))
00:16:40.516                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:40.516    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:16:40.516      return _end_unary_response_blocking(state, call, False, None)
00:16:40.516             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:40.516    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:16:40.516      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:16:40.516      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:16:40.516  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:16:40.516  	status = StatusCode.INTERNAL
00:16:40.516  	details = "Failed to start discovery"
00:16:40.516  	debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-12-09T11:06:57.378398976+01:00"}"
00:16:40.516  >
00:16:40.516   11:06:57 sma.sma_discovery -- common/autotest_common.sh@655 -- # es=1
00:16:40.516   11:06:57 sma.sma_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:40.516   11:06:57 sma.sma_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:40.516   11:06:57 sma.sma_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:40.516    11:06:57 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:40.516    11:06:57 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length'
00:16:40.775   11:06:57 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]]
00:16:40.775   11:06:57 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:40.775   11:06:57 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid'
00:16:40.775   11:06:57 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009
00:16:41.034  8009
00:16:41.034   11:06:57 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1
00:16:41.293   11:06:58 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2
00:16:41.552  WARNING:spdk.sma.volume.volume:Found disconnected volume: 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:43.453    11:07:00 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:43.453    11:07:00 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '. | length'
00:16:43.453   11:07:00 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]]
00:16:43.453   11:07:00 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 76d951e8-2f0b-484e-884e-8400765c7bbc
00:16:43.710   11:07:00 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a2027337-98d1-42c8-9f23-d41100b7ce72 8010
00:16:43.710   11:07:00 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:43.710   11:07:00 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:43.710   11:07:00 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:43.710    11:07:00 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a2027337-98d1-42c8-9f23-d41100b7ce72 8010
00:16:43.710    11:07:00 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:43.710    11:07:00 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:43.710    11:07:00 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:43.711     11:07:00 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:43.711     11:07:00 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:43.968     11:07:00 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:43.968  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:43.969  I0000 00:00:1733738820.962843  236459 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:43.969  I0000 00:00:1733738820.964794  236459 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:45.343  {}
00:16:45.343   11:07:02 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 97c4cd1e-f930-4261-82cf-f4b6d6403cf0 8010
00:16:45.343   11:07:02 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:45.343   11:07:02 sma.sma_discovery -- sma/discovery.sh@108 -- # shift
00:16:45.343   11:07:02 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:45.343    11:07:02 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 97c4cd1e-f930-4261-82cf-f4b6d6403cf0 8010
00:16:45.343    11:07:02 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:45.343    11:07:02 sma.sma_discovery -- sma/discovery.sh@51 -- # shift
00:16:45.343    11:07:02 sma.sma_discovery -- sma/discovery.sh@53 -- # cat
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:45.343     11:07:02 sma.sma_discovery -- sma/common.sh@20 -- # python
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010')
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 ))
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@36 -- # cat
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 ))
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ ))
00:16:45.343     11:07:02 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 ))
00:16:45.601  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:45.601  I0000 00:00:1733738822.444105  236714 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:45.601  I0000 00:00:1733738822.445567  236714 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:45.601  {}
00:16:45.601    11:07:02 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:45.601    11:07:02 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length'
00:16:45.860   11:07:02 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]]
00:16:45.860    11:07:02 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:45.860    11:07:02 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length'
00:16:46.119   11:07:02 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]]
00:16:46.119   11:07:02 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2
00:16:46.377   11:07:03 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2
00:16:47.313  WARNING:spdk.sma.volume.volume:Found disconnected volume: 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:48.247    11:07:05 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:48.247    11:07:05 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length'
00:16:48.505   11:07:05 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]]
00:16:48.505    11:07:05 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:48.505    11:07:05 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. | length'
00:16:48.762   11:07:05 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]]
00:16:48.762   11:07:05 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1
00:16:49.020   11:07:05 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2
00:16:49.279  WARNING:spdk.sma.volume.volume:Found disconnected volume: a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:51.183    11:07:07 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length'
00:16:51.183    11:07:07 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0
00:16:51.183   11:07:07 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]]
00:16:51.183    11:07:07 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info
00:16:51.183    11:07:07 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length'
00:16:51.441   11:07:08 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]]
00:16:51.441   11:07:08 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 a2027337-98d1-42c8-9f23-d41100b7ce72
00:16:51.441   11:07:08 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 97c4cd1e-f930-4261-82cf-f4b6d6403cf0
00:16:51.699   11:07:08 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0
00:16:51.699   11:07:08 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:16:51.958  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:16:51.958  I0000 00:00:1733738828.837466  237982 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:16:51.958  I0000 00:00:1733738828.839234  237982 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:16:51.958  {}
00:16:51.958   11:07:08 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup
00:16:51.958   11:07:08 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 227524
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 227524 ']'
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 227524
00:16:51.958    11:07:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:51.958    11:07:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227524
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=python3
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227524'
00:16:51.958  killing process with pid 227524
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 227524
00:16:51.958   11:07:08 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 227524
00:16:52.217   11:07:08 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 227523
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 227523 ']'
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 227523
00:16:52.217    11:07:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:52.217    11:07:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227523
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227523'
00:16:52.217  killing process with pid 227523
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 227523
00:16:52.217   11:07:08 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 227523
00:16:54.121   11:07:10 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 227521
00:16:54.121   11:07:10 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 227521 ']'
00:16:54.121   11:07:10 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 227521
00:16:54.121    11:07:10 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:54.121   11:07:10 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:54.121    11:07:10 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227521
00:16:54.122   11:07:10 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:54.122   11:07:10 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:54.122   11:07:10 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227521'
00:16:54.122  killing process with pid 227521
00:16:54.122   11:07:10 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 227521
00:16:54.122   11:07:10 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 227521
00:16:56.021   11:07:12 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 227522
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@954 -- # '[' -z 227522 ']'
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@958 -- # kill -0 227522
00:16:56.021    11:07:12 sma.sma_discovery -- common/autotest_common.sh@959 -- # uname
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:56.021    11:07:12 sma.sma_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227522
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227522'
00:16:56.021  killing process with pid 227522
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@973 -- # kill 227522
00:16:56.021   11:07:12 sma.sma_discovery -- common/autotest_common.sh@978 -- # wait 227522
00:16:57.923   11:07:14 sma.sma_discovery -- sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT
00:16:57.923  
00:16:57.923  real	1m1.265s
00:16:57.923  user	3m17.037s
00:16:57.923  sys	0m7.782s
00:16:57.923   11:07:14 sma.sma_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:57.923   11:07:14 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:57.923  ************************************
00:16:57.923  END TEST sma_discovery
00:16:57.923  ************************************
00:16:57.923   11:07:14 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:57.923   11:07:14 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:57.923   11:07:14 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:57.923   11:07:14 sma -- common/autotest_common.sh@10 -- # set +x
00:16:57.923  ************************************
00:16:57.923  START TEST sma_vhost
00:16:57.923  ************************************
00:16:57.923   11:07:14 sma.sma_vhost -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:57.923  * Looking for test storage...
00:16:58.188  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:58.188    11:07:14 sma.sma_vhost -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:58.188     11:07:14 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lcov --version
00:16:58.188     11:07:14 sma.sma_vhost -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:58.188    11:07:14 sma.sma_vhost -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-:
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-:
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<'
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@345 -- # : 1
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:58.188     11:07:14 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1
00:16:58.188     11:07:14 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1
00:16:58.188     11:07:14 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:58.188     11:07:14 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1
00:16:58.188    11:07:14 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1
00:16:58.188     11:07:15 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2
00:16:58.188     11:07:15 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2
00:16:58.188     11:07:15 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:58.188     11:07:15 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2
00:16:58.188    11:07:15 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2
00:16:58.188    11:07:15 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:58.188    11:07:15 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:58.188    11:07:15 sma.sma_vhost -- scripts/common.sh@368 -- # return 0
00:16:58.188    11:07:15 sma.sma_vhost -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:58.188    11:07:15 sma.sma_vhost -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:58.188  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:58.188  		--rc genhtml_branch_coverage=1
00:16:58.188  		--rc genhtml_function_coverage=1
00:16:58.188  		--rc genhtml_legend=1
00:16:58.188  		--rc geninfo_all_blocks=1
00:16:58.188  		--rc geninfo_unexecuted_blocks=1
00:16:58.188  		
00:16:58.188  		'
00:16:58.188    11:07:15 sma.sma_vhost -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:58.188  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:58.188  		--rc genhtml_branch_coverage=1
00:16:58.188  		--rc genhtml_function_coverage=1
00:16:58.188  		--rc genhtml_legend=1
00:16:58.188  		--rc geninfo_all_blocks=1
00:16:58.188  		--rc geninfo_unexecuted_blocks=1
00:16:58.188  		
00:16:58.188  		'
00:16:58.188    11:07:15 sma.sma_vhost -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:58.188  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:58.188  		--rc genhtml_branch_coverage=1
00:16:58.188  		--rc genhtml_function_coverage=1
00:16:58.188  		--rc genhtml_legend=1
00:16:58.188  		--rc geninfo_all_blocks=1
00:16:58.188  		--rc geninfo_unexecuted_blocks=1
00:16:58.188  		
00:16:58.188  		'
00:16:58.188    11:07:15 sma.sma_vhost -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:58.188  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:58.188  		--rc genhtml_branch_coverage=1
00:16:58.188  		--rc genhtml_function_coverage=1
00:16:58.188  		--rc genhtml_legend=1
00:16:58.188  		--rc geninfo_all_blocks=1
00:16:58.188  		--rc geninfo_unexecuted_blocks=1
00:16:58.188  		
00:16:58.188  		'
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@6 -- # : false
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img
00:16:58.188     11:07:15 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/..
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio
00:16:58.188      11:07:15 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh
00:16:58.188     11:07:15 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]'
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24
00:16:58.188     11:07:15 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1
00:16:58.188    11:07:15 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:16:58.188     11:07:15 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh
00:16:58.188      11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:16:58.188       11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup
00:16:58.188       11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:16:58.188       11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:16:58.188       11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2
00:16:58.188      11:07:15 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm
00:16:58.188   11:07:15 sma.sma_vhost -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:58.188   11:07:15 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.188   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@518 -- # xtrace_disable
00:16:58.188   11:07:15 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:16:58.188  INFO: Creating new VM in /root/vhost_test/vms/0
00:16:58.188  INFO: No '--os-mode' parameter provided - using 'snapshot'
00:16:58.188  INFO: TASK MASK: 1-2
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@671 -- # local node_num=0
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@672 -- # local boot_disk_present=false
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@673 -- # notice 'NUMA NODE: 0'
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0'
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0'
00:16:58.188  INFO: NUMA NODE: 0
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize)
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind")
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@676 -- # [[ snapshot == snapshot ]]
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-snapshot)
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@677 -- # [[ -n '' ]]
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait")
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@679 -- # cmd+=(-numa "node,memdev=mem")
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-pidfile "$qemu_pid_file")
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-serial "file:$vm_dir/serial.log")
00:16:58.188   11:07:15 sma.sma_vhost -- vhost/common.sh@682 -- # cmd+=(-D "$vm_dir/qemu.log")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@683 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@684 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@685 -- # cmd+=(-net nic)
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@686 -- # [[ -z '' ]]
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@687 -- # cmd+=(-drive "file=$os,if=none,id=os_disk")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@688 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@691 -- # (( 0 == 0 ))
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@691 -- # [[ virtio == virtio* ]]
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@692 -- # disks=("default_virtio.img")
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@698 -- # for disk in "${disks[@]}"
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@701 -- # IFS=,
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@701 -- # read -r disk disk_type _
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@702 -- # [[ -z '' ]]
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@702 -- # disk_type=virtio
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@704 -- # case $disk_type in
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@706 -- # local raw_name=RAWSCSI
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@707 -- # local raw_disk=/root/vhost_test/vms/0/test.img
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@710 -- # [[ -f default_virtio.img ]]
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@714 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img'
00:16:58.189  INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img
00:16:58.189   11:07:15 sma.sma_vhost -- vhost/common.sh@715 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024
00:16:58.756  1024+0 records in
00:16:58.756  1024+0 records out
00:16:58.756  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.447168 s, 2.4 GB/s
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@718 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number")
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@719 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name")
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@720 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache")
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@780 -- # [[ -n '' ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@785 -- # (( 1 ))
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@785 -- # cmd+=("${qemu_args[@]}")
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@786 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh'
00:16:58.756  INFO: Saving to /root/vhost_test/vms/0/run.sh
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@787 -- # cat
00:16:58.756    11:07:15 sma.sma_vhost -- vhost/common.sh@787 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@824 -- # chmod +x /root/vhost_test/vms/0/run.sh
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10000
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@828 -- # echo 10001
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@829 -- # echo 10002
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@831 -- # rm -f /root/vhost_test/vms/0/migration_port
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@832 -- # [[ -z '' ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@834 -- # echo 10004
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@835 -- # echo 100
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@837 -- # [[ -z '' ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@838 -- # [[ -z '' ]]
00:16:58.756   11:07:15 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@842 -- # local OPTIND optchar vm
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@843 -- # local run_all=false
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@844 -- # local vms_to_run=
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@846 -- # getopts a-: optchar
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@856 -- # false
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@859 -- # shift 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@860 -- # for vm in "$@"
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@861 -- # vm_num_is_valid 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@862 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@866 -- # vms_to_run+=' 0'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@870 -- # for vm in $vms_to_run
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@871 -- # vm_is_running 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@369 -- # vm_num_is_valid 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@370 -- # local vm_dir=/root/vhost_test/vms/0
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@372 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@373 -- # return 1
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@876 -- # notice 'running /root/vhost_test/vms/0/run.sh'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh'
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh'
00:16:58.756  INFO: running /root/vhost_test/vms/0/run.sh
00:16:58.756   11:07:15 sma.sma_vhost -- vhost/common.sh@877 -- # /root/vhost_test/vms/0/run.sh
00:16:58.756  Running VM in /root/vhost_test/vms/0
00:16:59.322  Waiting for QEMU pid file
00:17:00.258  === qemu.log ===
00:17:00.258  === qemu.log ===
00:17:00.258   11:07:17 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0
00:17:00.258   11:07:17 sma.sma_vhost -- vhost/common.sh@913 -- # assert_number 300
00:17:00.258   11:07:17 sma.sma_vhost -- vhost/common.sh@281 -- # [[ 300 =~ [0-9]+ ]]
00:17:00.258   11:07:17 sma.sma_vhost -- vhost/common.sh@281 -- # return 0
00:17:00.258   11:07:17 sma.sma_vhost -- vhost/common.sh@915 -- # xtrace_disable
00:17:00.258   11:07:17 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:00.258  INFO: Waiting for VMs to boot
00:17:00.258  INFO: waiting for VM0 (/root/vhost_test/vms/0)
00:17:22.188  
00:17:22.188  INFO: VM0 ready
00:17:22.188  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:22.188  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:22.447  INFO: all VMs ready
00:17:22.447   11:07:39 sma.sma_vhost -- vhost/common.sh@973 -- # return 0
00:17:22.447   11:07:39 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:22.447   11:07:39 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc
00:17:22.447   11:07:39 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=243540
00:17:22.447   11:07:39 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 243540
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@835 -- # '[' -z 243540 ']'
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:22.447  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:22.447   11:07:39 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:22.447  [2024-12-09 11:07:39.365218] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:17:22.447  [2024-12-09 11:07:39.365322] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243540 ]
00:17:22.447  EAL: No free 2048 kB hugepages reported on node 1
00:17:22.705  [2024-12-09 11:07:39.510384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:22.705  [2024-12-09 11:07:39.614882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:22.705  [2024-12-09 11:07:39.614897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@868 -- # return 0
00:17:23.272   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.272   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.272  [2024-12-09 11:07:40.237432] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.272   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.272  [2024-12-09 11:07:40.245493] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.272   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.272  [2024-12-09 11:07:40.253466] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:17:23.272   11:07:40 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.272   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init
00:17:23.273   11:07:40 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:23.273   11:07:40 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:23.531  [2024-12-09 11:07:40.439402] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:17:23.790   11:07:40 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:23.790   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=243755
00:17:23.790   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:23.790   11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@8 -- # local sma_port=8080
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 ))
00:17:23.790    11:07:40 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:23.790   11:07:40 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s
00:17:24.049  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:24.049  I0000 00:00:1733738860.841509  243755 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:24.985   11:07:41 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ ))
00:17:24.985   11:07:41 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 ))
00:17:24.985   11:07:41 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:24.985   11:07:41 sma.sma_vhost -- sma/common.sh@12 -- # return 0
00:17:24.985    11:07:41 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:24.985     11:07:41 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:24.985    11:07:41 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:24.985  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:24.985   11:07:41 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]]
00:17:24.985   11:07:41 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096
00:17:24.985   11:07:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:24.985   11:07:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:24.985  null0
00:17:25.243   11:07:41 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.243   11:07:41 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096
00:17:25.243   11:07:41 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.243   11:07:41 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:25.243  null1
00:17:25.243   11:07:42 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid'
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.243   11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid'
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:25.243    11:07:42 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.243   11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:25.243    11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle
00:17:25.244     11:07:42 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:25.244     11:07:42 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:25.502  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:25.502  I0000 00:00:1733738862.427237  244004 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:25.502  I0000 00:00:1733738862.429293  244004 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:25.502  I0000 00:00:1733738862.430861  244010 subchannel.cc:806] subchannel 0x5588eb99db20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5588eb988840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5588ebaa2380, grpc.internal.client_channel_call_destination=0x7f398787c390, grpc.internal.event_engine=0x5588eb8b9ca0, grpc.internal.security_connector=0x5588eb9a0850, grpc.internal.subchannel_pool=0x5588eb9a06b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5588eb7e7770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:42.430405864+01:00"}), backing off for 1000 ms
00:17:25.502  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:17:25.502  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:26.879   11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0
00:17:26.879   11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:26.879   11:07:43 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.879   11:07:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:26.879  [
00:17:26.879  {
00:17:26.879  "ctrlr": "sma-0",
00:17:26.879  "cpumask": "0x3",
00:17:26.879  "delay_base_us": 0,
00:17:26.879  "iops_threshold": 60000,
00:17:26.879  "socket": "/var/tmp/sma-0",
00:17:26.879  "sessions": [
00:17:26.879  {
00:17:26.879  "vid": 0,
00:17:26.879  "id": 0,
00:17:26.879  "name": "sma-0s0",
00:17:26.879  "started": false,
00:17:26.879  "max_queues": 0,
00:17:26.879  "inflight_task_cnt": 0
00:17:26.879  }
00:17:26.879  ],
00:17:26.879  "backend_specific": {
00:17:26.879  "block": {
00:17:26.879  "readonly": false,
00:17:26.879  "bdev": "null0",
00:17:26.879  "transport": "vhost_user_blk"
00:17:26.879  }
00:17:26.879  }
00:17:26.879  }
00:17:26.879  ]
00:17:26.879   11:07:43 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.879    11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:26.879    11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:26.879    11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle
00:17:26.879     11:07:43 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:26.879     11:07:43 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:26.879  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fb34be00000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7fa4c2400000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7fa4c2400000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:26.880  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:26.880  I0000 00:00:1733738863.792687  244244 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:26.880  I0000 00:00:1733738863.794391  244244 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:26.880  I0000 00:00:1733738863.795820  244356 subchannel.cc:806] subchannel 0x55f9a74dbb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f9a74c6840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f9a75e0380, grpc.internal.client_channel_call_destination=0x7f7ec0015390, grpc.internal.event_engine=0x55f9a73f7ca0, grpc.internal.security_connector=0x55f9a74de850, grpc.internal.subchannel_pool=0x55f9a74de6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f9a7325770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:43.795345341+01:00"}), backing off for 1000 ms
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 243
00:17:26.880  VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 241
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:245
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:246
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG
00:17:27.818   11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1
00:17:27.818   11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:27.818  [
00:17:27.818  {
00:17:27.818  "ctrlr": "sma-0",
00:17:27.818  "cpumask": "0x3",
00:17:27.818  "delay_base_us": 0,
00:17:27.818  "iops_threshold": 60000,
00:17:27.818  "socket": "/var/tmp/sma-0",
00:17:27.818  "sessions": [
00:17:27.818  {
00:17:27.818  "vid": 0,
00:17:27.818  "id": 0,
00:17:27.818  "name": "sma-0s0",
00:17:27.818  "started": true,
00:17:27.818  "max_queues": 2,
00:17:27.818  "inflight_task_cnt": 0
00:17:27.818  }
00:17:27.818  ],
00:17:27.818  "backend_specific": {
00:17:27.818  "block": {
00:17:27.818  "readonly": false,
00:17:27.818  "bdev": "null0",
00:17:27.818  "transport": "vhost_user_blk"
00:17:27.818  }
00:17:27.818  }
00:17:27.818  }
00:17:27.818  ]
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.818   11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:27.818  [
00:17:27.818  {
00:17:27.818  "ctrlr": "sma-1",
00:17:27.818  "cpumask": "0x3",
00:17:27.818  "delay_base_us": 0,
00:17:27.818  "iops_threshold": 60000,
00:17:27.818  "socket": "/var/tmp/sma-1",
00:17:27.818  "sessions": [
00:17:27.818  {
00:17:27.818  "vid": 1,
00:17:27.818  "id": 0,
00:17:27.818  "name": "sma-1s1",
00:17:27.818  "started": false,
00:17:27.818  "max_queues": 0,
00:17:27.818  "inflight_task_cnt": 0
00:17:27.818  }
00:17:27.818  ],
00:17:27.818  "backend_specific": {
00:17:27.818  "block": {
00:17:27.818  "readonly": false,
00:17:27.818  "bdev": "null1",
00:17:27.818  "transport": "vhost_user_blk"
00:17:27.818  }
00:17:27.818  }
00:17:27.818  }
00:17:27.818  ]
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008):
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 242
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 247
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:242
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:245
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest physical addr: 0x0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 guest virtual  addr: 0x7fb34be00000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 host  virtual  addr: 0x7fa482400000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap addr : 0x7fa482400000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap size : 0x40000000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap align: 0x200000
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) 	 mmap off  : 0x0
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:27.818   11:07:44 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:248
00:17:27.818   11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM
00:17:27.818    11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers
00:17:27.818    11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length'
00:17:27.818    11:07:44 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK
00:17:27.818  VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:249
00:17:27.819    11:07:44 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f):
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 0
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 1
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 1
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 1
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 1
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:27.819  VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing.
00:17:27.819    11:07:44 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.819   11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]]
00:17:27.819    11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:27.819    11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:27.819    11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle
00:17:27.819     11:07:44 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:27.819     11:07:44 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:28.077  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:28.077  I0000 00:00:1733738864.998319  244485 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:28.077  I0000 00:00:1733738865.000147  244485 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:28.077  I0000 00:00:1733738865.001644  244629 subchannel.cc:806] subchannel 0x563c4c3a0b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563c4c38b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563c4c4a5380, grpc.internal.client_channel_call_destination=0x7f8ff1390390, grpc.internal.event_engine=0x563c4c2bcca0, grpc.internal.security_connector=0x563c4c3a3850, grpc.internal.subchannel_pool=0x563c4c3a36b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563c4c1ea770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:45.00107983+01:00"}), backing off for 1000 ms
00:17:28.077   11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0
00:17:28.077    11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:28.077    11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:28.077    11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle
00:17:28.077     11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:28.077     11:07:45 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:28.647  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:28.647  I0000 00:00:1733738865.372905  244703 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:28.647  I0000 00:00:1733738865.374541  244703 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:28.648  I0000 00:00:1733738865.375872  244710 subchannel.cc:806] subchannel 0x55b851a82b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b851a6d840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b851b87380, grpc.internal.client_channel_call_destination=0x7f6419f17390, grpc.internal.event_engine=0x55b85199eca0, grpc.internal.security_connector=0x55b851a85850, grpc.internal.subchannel_pool=0x55b851a856b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b8518cc770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:45.37543737+01:00"}), backing off for 1000 ms
00:17:28.648   11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1
00:17:28.648   11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 1 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:28.648   11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=create_device
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:28.648    11:07:45 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t create_device
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:28.648   11:07:45 sma.sma_vhost -- common/autotest_common.sh@655 -- # create_device 1 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:28.648   11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:28.648    11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:28.648    11:07:45 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:28.907  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:28.907  I0000 00:00:1733738865.743850  244733 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:28.907  I0000 00:00:1733738865.745406  244733 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:28.907  I0000 00:00:1733738865.746757  244744 subchannel.cc:806] subchannel 0x5574bb27fb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5574bb26a840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5574bb384380, grpc.internal.client_channel_call_destination=0x7f1e22f72390, grpc.internal.event_engine=0x5574bb19bca0, grpc.internal.security_connector=0x5574bb282850, grpc.internal.subchannel_pool=0x5574bb2826b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5574bb0c9770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:45.746235985+01:00"}), backing off for 1000 ms
00:17:28.907  Traceback (most recent call last):
00:17:28.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:28.907      main(sys.argv[1:])
00:17:28.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:28.907      result = client.call(request['method'], request.get('params', {}))
00:17:28.907               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:28.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:28.907      response = func(request=json_format.ParseDict(params, input()))
00:17:28.907                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:28.907    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:28.907      return _end_unary_response_blocking(state, call, False, None)
00:17:28.907             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:28.907    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:28.907      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:28.907      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:28.907  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:28.907  	status = StatusCode.INTERNAL
00:17:28.907  	details = "Failed to create vhost device"
00:17:28.907  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:07:45.798197385+01:00", grpc_status:13, grpc_message:"Failed to create vhost device"}"
00:17:28.907  >
00:17:28.907   11:07:45 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:28.907   11:07:45 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:28.907   11:07:45 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:28.907   11:07:45 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:28.907    11:07:45 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:28.907     11:07:45 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:28.907    11:07:45 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:28.907  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]]
00:17:29.845    11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length'
00:17:29.845    11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers
00:17:29.845    11:07:46 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.845    11:07:46 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:29.845    11:07:46 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]]
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]]
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]]
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0
00:17:29.845   11:07:46 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:30.104  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.104  I0000 00:00:1733738866.908196  244977 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.104  I0000 00:00:1733738866.910102  244977 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:30.104  I0000 00:00:1733738866.911422  244978 subchannel.cc:806] subchannel 0x56361c6a1b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56361c68c840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56361c7a6380, grpc.internal.client_channel_call_destination=0x7f7324084390, grpc.internal.event_engine=0x56361c5bdca0, grpc.internal.security_connector=0x56361c6a4850, grpc.internal.subchannel_pool=0x56361c6a46b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56361c4eb770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:46.910982173+01:00"}), backing off for 1000 ms
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:0
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:50
00:17:30.364  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:30.364  {}
00:17:30.364   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.364    11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-0
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:30.364  request:
00:17:30.364  {
00:17:30.364  "name": "sma-0",
00:17:30.364  "method": "vhost_get_controllers",
00:17:30.364  "req_id": 1
00:17:30.364  }
00:17:30.364  Got JSON-RPC error response
00:17:30.364  response:
00:17:30.364  {
00:17:30.364  "code": -32603,
00:17:30.364  "message": "No such device"
00:17:30.364  }
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:30.364   11:07:47 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:30.364    11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers
00:17:30.364    11:07:47 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.364    11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. | length'
00:17:30.364    11:07:47 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:30.364    11:07:47 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.623   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]]
00:17:30.623   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1
00:17:30.623   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:30.623  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:30.623  I0000 00:00:1733738867.574131  245149 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:30.623  I0000 00:00:1733738867.575621  245149 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:30.623  I0000 00:00:1733738867.576936  245199 subchannel.cc:806] subchannel 0x56122c177b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56122c162840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56122c27c380, grpc.internal.client_channel_call_destination=0x7ff757cfe390, grpc.internal.event_engine=0x56122c093ca0, grpc.internal.security_connector=0x56122c17a850, grpc.internal.subchannel_pool=0x56122c17a6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56122bfc1770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:47.576443894+01:00"}), backing off for 1000 ms
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000):
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-RESET: 1
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-ACKNOWLEDGE: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-FEATURES_OK: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-DRIVER_OK: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-DEVICE_NEED_RESET: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) 	-FAILED: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:3
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE
00:17:30.623  VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:47
00:17:30.882  VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed
00:17:30.882  {}
00:17:30.882   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.882    11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@655 -- # rpc_cmd vhost_get_controllers -n sma-1
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:30.882  request:
00:17:30.882  {
00:17:30.882  "name": "sma-1",
00:17:30.882  "method": "vhost_get_controllers",
00:17:30.882  "req_id": 1
00:17:30.882  }
00:17:30.882  Got JSON-RPC error response
00:17:30.882  response:
00:17:30.882  {
00:17:30.882  "code": -32603,
00:17:30.882  "message": "No such device"
00:17:30.882  }
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:30.882   11:07:47 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:30.882    11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length'
00:17:30.882    11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers
00:17:30.882    11:07:47 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.882    11:07:47 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:30.882    11:07:47 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.882   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]]
00:17:30.882   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0
00:17:30.882   11:07:47 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:31.140  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:31.140  I0000 00:00:1733738867.993532  245229 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:31.140  I0000 00:00:1733738867.995157  245229 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:31.140  I0000 00:00:1733738867.996326  245230 subchannel.cc:806] subchannel 0x55b06dc20b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b06dc0b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b06dd25380, grpc.internal.client_channel_call_destination=0x7f514b981390, grpc.internal.event_engine=0x55b06db3cca0, grpc.internal.security_connector=0x55b06dc23850, grpc.internal.subchannel_pool=0x55b06dc236b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b06da6a770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:47.995900796+01:00"}), backing off for 999 ms
00:17:31.140  {}
00:17:31.141   11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1
00:17:31.141   11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:31.399  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:31.399  I0000 00:00:1733738868.242952  245250 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:31.399  I0000 00:00:1733738868.244507  245250 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:31.399  I0000 00:00:1733738868.245813  245255 subchannel.cc:806] subchannel 0x55c27bd27b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c27bd12840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c27be2c380, grpc.internal.client_channel_call_destination=0x7f519c038390, grpc.internal.event_engine=0x55c27bc43ca0, grpc.internal.security_connector=0x55c27bd2a850, grpc.internal.subchannel_pool=0x55c27bd2a6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c27bb71770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:48.245320939+01:00"}), backing off for 1000 ms
00:17:31.399  {}
00:17:31.399    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:31.399     11:07:48 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:31.399    11:07:48 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:31.399  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:31.658   11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]]
00:17:31.658   11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=()
00:17:31.658    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:31.658    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid'
00:17:31.658    11:07:48 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:31.658    11:07:48 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:31.658    11:07:48 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:31.658   11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:31.658    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:31.658    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle
00:17:31.658    11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:31.658     11:07:48 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:31.658     11:07:48 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:31.917  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:31.917  I0000 00:00:1733738868.825058  245433 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:31.917  I0000 00:00:1733738868.826804  245433 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:31.917  I0000 00:00:1733738868.828201  245492 subchannel.cc:806] subchannel 0x55bbf3e8db20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55bbf3e78840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55bbf3f92380, grpc.internal.client_channel_call_destination=0x7f5d65c3f390, grpc.internal.event_engine=0x55bbf3da9ca0, grpc.internal.security_connector=0x55bbf3e90850, grpc.internal.subchannel_pool=0x55bbf3e906b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55bbf3cd7770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:48.82766808+01:00"}), backing off for 1000 ms
00:17:31.917  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 232
00:17:31.917  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 59
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:236
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:237
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:32.854   11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0
00:17:32.854    11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1
00:17:32.854    11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid'
00:17:32.854    11:07:49 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.854    11:07:49 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:32.854    11:07:49 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.854   11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:32.854    11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:32.854    11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle
00:17:32.854    11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:32.854     11:07:49 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 8f5964e3-e21b-4082-9662-8f79b00be1db
00:17:32.854     11:07:49 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 238
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:236
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fb34be00000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7fa4c2400000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7fa4c2400000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:239
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:240
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:32.854  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:32.855  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:33.113  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:33.113  I0000 00:00:1733738870.081931  245720 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:33.113  I0000 00:00:1733738870.083924  245720 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:33.113  I0000 00:00:1733738870.085380  245727 subchannel.cc:806] subchannel 0x55a15c383b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55a15c36e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a15c488380, grpc.internal.client_channel_call_destination=0x7fe955773390, grpc.internal.event_engine=0x55a15c29fca0, grpc.internal.security_connector=0x55a15c386850, grpc.internal.subchannel_pool=0x55a15c3866b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a15c1cd770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:50.084874299+01:00"}), backing off for 999 ms
00:17:33.113  VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 243
00:17:33.113  VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 241
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:245
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:246
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG
00:17:34.049   11:07:50 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32
00:17:34.049    11:07:50 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:34.049     11:07:50 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:34.049    11:07:50 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:34.049  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008):
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 242
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 247
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 2112
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:242
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:245
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest physical addr: 0x0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 guest virtual  addr: 0x7fb34be00000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 host  virtual  addr: 0x7fa482400000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap addr : 0x7fa482400000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap size : 0x40000000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap align: 0x200000
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	 mmap off  : 0x0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:248
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:249
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f):
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 1
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:34.049  VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing.
00:17:34.307   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]]
00:17:34.307   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:17:34.307   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0
00:17:34.308   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:34.566  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:34.566  I0000 00:00:1733738871.319979  245954 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:34.566  I0000 00:00:1733738871.321508  245954 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:34.566  I0000 00:00:1733738871.322909  245955 subchannel.cc:806] subchannel 0x561340076b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561340061840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56134017b380, grpc.internal.client_channel_call_destination=0x7f003eb0c390, grpc.internal.event_engine=0x56133ff92ca0, grpc.internal.security_connector=0x561340079850, grpc.internal.subchannel_pool=0x5613400796b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56133fec0770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:51.322339075+01:00"}), backing off for 1000 ms
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:49
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1
00:17:34.566  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:34.566  {}
00:17:34.566   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}"
00:17:34.566   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32
00:17:34.566   11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:34.825  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:34.825  I0000 00:00:1733738871.681434  245975 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:34.825  I0000 00:00:1733738871.682959  245975 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:34.825  I0000 00:00:1733738871.684132  245978 subchannel.cc:806] subchannel 0x5585796dfb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5585796ca840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5585797e4380, grpc.internal.client_channel_call_destination=0x7f526f742390, grpc.internal.event_engine=0x5585795fbca0, grpc.internal.security_connector=0x5585796e2850, grpc.internal.subchannel_pool=0x5585796e26b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558579529770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:51.683601392+01:00"}), backing off for 1000 ms
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000):
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-RESET: 1
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-ACKNOWLEDGE: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-FEATURES_OK: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-DRIVER_OK: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-DEVICE_NEED_RESET: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) 	-FAILED: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:0
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:50
00:17:34.825  VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed
00:17:34.825  {}
00:17:35.084    11:07:51 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l'
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@336 -- # vm_num_is_valid 0
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@338 -- # local vm_num=0
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@339 -- # shift
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@341 -- # vm_ssh_socket 0
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@319 -- # vm_num_is_valid 0
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0
00:17:35.084     11:07:51 sma.sma_vhost -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/ssh_socket
00:17:35.084    11:07:51 sma.sma_vhost -- vhost/common.sh@341 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l'
00:17:35.084  Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts.
00:17:35.342   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]]
00:17:35.342   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef
00:17:35.342   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096
00:17:35.342   11:07:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.342   11:07:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:35.342  malloc0
00:17:35.342   11:07:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.342    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0
00:17:35.342    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid'
00:17:35.342    11:07:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.342    11:07:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:35.342    11:07:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.342   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=79c2217a-7143-4770-989b-b488ba021bff
00:17:35.342    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle
00:17:35.342    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:35.342     11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 79c2217a-7143-4770-989b-b488ba021bff
00:17:35.342     11:07:52 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:35.342     11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC
00:17:35.342     11:07:52 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in
00:17:35.342     11:07:52 sma.sma_vhost -- sma/common.sh@28 -- # echo 0
00:17:35.342     11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef
00:17:35.342     11:07:52 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:17:35.342      11:07:52 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:35.601  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:35.601  I0000 00:00:1733738872.597983  246212 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:35.601  I0000 00:00:1733738872.599759  246212 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:35.601  I0000 00:00:1733738872.601391  246220 subchannel.cc:806] subchannel 0x55d0c01cfb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d0c01ba840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d0c02d4380, grpc.internal.client_channel_call_destination=0x7fde70c24390, grpc.internal.event_engine=0x55d0c00ebca0, grpc.internal.security_connector=0x55d0c01d2850, grpc.internal.subchannel_pool=0x55d0c01d26b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d0c0019770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:52.600881862+01:00"}), backing off for 999 ms
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 60
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:35.860  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 58
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:58
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fb34be00000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7fa4c2400000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7fa4c2400000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:257
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:258
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:36.120  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:36.120   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0
00:17:36.120    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length'
00:17:36.120    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.120   11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]]
00:17:36.120    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers
00:17:36.120    11:07:52 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev'
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.120    11:07:52 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.120   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=f0917c25-69f1-4094-9c7d-f8ba65582cb8
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.120   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{
00:17:36.120    "name": "f0917c25-69f1-4094-9c7d-f8ba65582cb8",
00:17:36.120    "aliases": [
00:17:36.120      "eef98f3e-314e-5417-9947-94180d5b5724"
00:17:36.120    ],
00:17:36.120    "product_name": "crypto",
00:17:36.120    "block_size": 4096,
00:17:36.120    "num_blocks": 8192,
00:17:36.120    "uuid": "eef98f3e-314e-5417-9947-94180d5b5724",
00:17:36.120    "assigned_rate_limits": {
00:17:36.120      "rw_ios_per_sec": 0,
00:17:36.120      "rw_mbytes_per_sec": 0,
00:17:36.120      "r_mbytes_per_sec": 0,
00:17:36.120      "w_mbytes_per_sec": 0
00:17:36.120    },
00:17:36.120    "claimed": false,
00:17:36.120    "zoned": false,
00:17:36.120    "supported_io_types": {
00:17:36.120      "read": true,
00:17:36.120      "write": true,
00:17:36.120      "unmap": true,
00:17:36.120      "flush": true,
00:17:36.120      "reset": true,
00:17:36.120      "nvme_admin": false,
00:17:36.120      "nvme_io": false,
00:17:36.120      "nvme_io_md": false,
00:17:36.120      "write_zeroes": true,
00:17:36.120      "zcopy": false,
00:17:36.120      "get_zone_info": false,
00:17:36.120      "zone_management": false,
00:17:36.120      "zone_append": false,
00:17:36.120      "compare": false,
00:17:36.120      "compare_and_write": false,
00:17:36.120      "abort": false,
00:17:36.120      "seek_hole": false,
00:17:36.120      "seek_data": false,
00:17:36.120      "copy": false,
00:17:36.120      "nvme_iov_md": false
00:17:36.120    },
00:17:36.120    "memory_domains": [
00:17:36.120      {
00:17:36.120        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:36.120        "dma_device_type": 2
00:17:36.120      }
00:17:36.120    ],
00:17:36.120    "driver_specific": {
00:17:36.120      "crypto": {
00:17:36.120        "base_bdev_name": "malloc0",
00:17:36.120        "name": "f0917c25-69f1-4094-9c7d-f8ba65582cb8",
00:17:36.120        "key_name": "f0917c25-69f1-4094-9c7d-f8ba65582cb8_AES_CBC"
00:17:36.120      }
00:17:36.120    }
00:17:36.120  }'
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name
00:17:36.120   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ f0917c25-69f1-4094-9c7d-f8ba65582cb8 == \f\0\9\1\7\c\2\5\-\6\9\f\1\-\4\0\9\4\-\9\c\7\d\-\f\8\b\a\6\5\5\8\2\c\b\8 ]]
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name
00:17:36.120   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=f0917c25-69f1-4094-9c7d-f8ba65582cb8_AES_CBC
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k f0917c25-69f1-4094-9c7d-f8ba65582cb8_AES_CBC
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.120    11:07:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.120   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[
00:17:36.120  {
00:17:36.120  "name": "f0917c25-69f1-4094-9c7d-f8ba65582cb8_AES_CBC",
00:17:36.120  "cipher": "AES_CBC",
00:17:36.120  "key": "1234567890abcdef1234567890abcdef"
00:17:36.120  }
00:17:36.120  ]'
00:17:36.120    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key'
00:17:36.379   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:36.379    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher'
00:17:36.379   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:36.379   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0
00:17:36.379   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:36.638  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:36.638  I0000 00:00:1733738873.412127  246292 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:36.638  I0000 00:00:1733738873.413918  246292 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:36.638  I0000 00:00:1733738873.415217  246461 subchannel.cc:806] subchannel 0x55b285c0fb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b285bfa840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b285d14380, grpc.internal.client_channel_call_destination=0x7f593d0da390, grpc.internal.event_engine=0x55b285b2bca0, grpc.internal.security_connector=0x55b285c12850, grpc.internal.subchannel_pool=0x55b285c126b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b285a59770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:53.414728882+01:00"}), backing off for 999 ms
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:36
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:0
00:17:36.638  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:36.638  {}
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.898   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]]
00:17:36.898   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid'
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:36.898    11:07:53 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.898   11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:36.898    11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle
00:17:36.898     11:07:53 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:36.898     11:07:53 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:37.157  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:37.157  I0000 00:00:1733738874.042384  246495 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:37.157  I0000 00:00:1733738874.044283  246495 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:37.157  I0000 00:00:1733738874.045624  246507 subchannel.cc:806] subchannel 0x558827e5bb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558827e46840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558827f60380, grpc.internal.client_channel_call_destination=0x7f499c0a3390, grpc.internal.event_engine=0x558827d77ca0, grpc.internal.security_connector=0x558827e5e850, grpc.internal.subchannel_pool=0x558827e5e6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558827ca5770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:54.04520957+01:00"}), backing off for 1000 ms
00:17:37.157  VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252
00:17:37.157  VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 58
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:254
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:255
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG
00:17:37.724   11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0
00:17:37.724    11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2
00:17:37.724    11:07:54 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir
00:17:37.724   11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61
00:17:37.724    11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:17:37.724    11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys
00:17:37.724     11:07:54 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:37.724    11:07:54 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:17:37.724    11:07:54 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008):
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 60
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 256
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:60
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:254
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest physical addr: 0x0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 guest virtual  addr: 0x7fb34be00000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 host  virtual  addr: 0x7fa482200000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap addr : 0x7fa482200000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap size : 0x40000000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap align: 0x200000
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) 	 mmap off  : 0x0
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0.
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:257
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE
00:17:37.724  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0.
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:258
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f):
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 0
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 1
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 1
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 1
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 1
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:37.725  VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing.
00:17:37.983  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:37.983  I0000 00:00:1733738874.841257  246728 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:37.983  I0000 00:00:1733738874.842985  246728 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:37.984  I0000 00:00:1733738874.844285  246737 subchannel.cc:806] subchannel 0x5584ffb621a0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5584ff973480, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5584ffb5b8b0, grpc.internal.client_channel_call_destination=0x7f4166a97390, grpc.internal.event_engine=0x5584ffa2a480, grpc.internal.security_connector=0x5584ffb5b100, grpc.internal.subchannel_pool=0x5584ffa37a00, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5584ff92a320, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:54.843811289+01:00"}), backing off for 999 ms
00:17:37.984   11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:37.984    11:07:54 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:37.984    11:07:54 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:38.243  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:38.243  I0000 00:00:1733738875.097231  246759 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:38.243  I0000 00:00:1733738875.098937  246759 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:38.243  I0000 00:00:1733738875.100232  246763 subchannel.cc:806] subchannel 0x55e6d80c6b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e6d80b1840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e6d81cb380, grpc.internal.client_channel_call_destination=0x7fbe802fd390, grpc.internal.event_engine=0x55e6d7fe2ca0, grpc.internal.security_connector=0x55e6d80c9850, grpc.internal.subchannel_pool=0x55e6d80c96b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e6d7f10770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:55.099768055+01:00"}), backing off for 999 ms
00:17:38.243  {}
00:17:38.243    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys
00:17:38.243   11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61
00:17:38.243    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:38.243    11:07:55 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.243    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:38.243    11:07:55 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:38.243    11:07:55 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.243   11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.502  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:38.502  I0000 00:00:1733738875.398053  246789 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:38.502  I0000 00:00:1733738875.399522  246789 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:38.502  I0000 00:00:1733738875.400710  246790 subchannel.cc:806] subchannel 0x5580dbf37b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5580dbf22840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5580dc03c380, grpc.internal.client_channel_call_destination=0x7f8050107390, grpc.internal.event_engine=0x5580dbe53ca0, grpc.internal.security_connector=0x5580dbf41df0, grpc.internal.subchannel_pool=0x5580dbf3a6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5580dbd81770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:55.400298593+01:00"}), backing off for 1000 ms
00:17:38.502  {}
00:17:38.502    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:38.502    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys
00:17:38.502    11:07:55 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.502    11:07:55 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:38.502   11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61
00:17:38.502    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:38.502    11:07:55 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.502   11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.502     11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen
00:17:38.502    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 e56dbb79-fb9a-48b9-9232-954fefd0ab11
00:17:38.502    11:07:55 sma.sma_vhost -- sma/common.sh@20 -- # python
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:38.760    11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:38.760    11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:38.760   11:07:55 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:38.760  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:38.760  I0000 00:00:1733738875.756375  246830 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:38.760  I0000 00:00:1733738875.758228  246830 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:38.760  I0000 00:00:1733738875.760011  247019 subchannel.cc:806] subchannel 0x5631626bfb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5631626aa840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5631627c4380, grpc.internal.client_channel_call_destination=0x7fddee448390, grpc.internal.event_engine=0x5631625dbca0, grpc.internal.security_connector=0x5631626c2850, grpc.internal.subchannel_pool=0x5631626c26b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563162509770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:55.759210952+01:00"}), backing off for 1000 ms
00:17:39.019  [2024-12-09 11:07:55.793699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: e56dbb79-fb9a-48b9-9232-954fefd0ab11
00:17:39.019  Traceback (most recent call last):
00:17:39.019    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:39.019      main(sys.argv[1:])
00:17:39.019    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:39.019      result = client.call(request['method'], request.get('params', {}))
00:17:39.019               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.019    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:39.019      response = func(request=json_format.ParseDict(params, input()))
00:17:39.019                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.019    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:39.019      return _end_unary_response_blocking(state, call, False, None)
00:17:39.019             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.019    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:39.019      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:39.019      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.019  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:39.019  	status = StatusCode.INVALID_ARGUMENT
00:17:39.019  	details = "Specified volume is not attached to the device"
00:17:39.019  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Specified volume is not attached to the device", grpc_status:3, created_time:"2024-12-09T11:07:55.798242288+01:00"}"
00:17:39.019  >
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:39.019   11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.019    11:07:55 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@652 -- # local es=0
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.019    11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.019   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.019    11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.020   11:07:55 sma.sma_vhost -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:39.020   11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.020   11:07:55 sma.sma_vhost -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:17:39.020   11:07:55 sma.sma_vhost -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.020  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:39.020  I0000 00:00:1733738876.027317  247049 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:39.279  I0000 00:00:1733738876.032043  247049 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:39.279  I0000 00:00:1733738876.033398  247053 subchannel.cc:806] subchannel 0x563e7d35eb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563e7d349840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563e7d463380, grpc.internal.client_channel_call_destination=0x7fb57c30a390, grpc.internal.event_engine=0x563e7d27aca0, grpc.internal.security_connector=0x563e7d368df0, grpc.internal.subchannel_pool=0x563e7d3616b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563e7d1a8770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:56.032929082+01:00"}), backing off for 999 ms
00:17:39.279  Traceback (most recent call last):
00:17:39.279    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:39.279      main(sys.argv[1:])
00:17:39.279    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:39.279      result = client.call(request['method'], request.get('params', {}))
00:17:39.279               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.279    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:39.279      response = func(request=json_format.ParseDict(params, input()))
00:17:39.279                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.279    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:39.279      return _end_unary_response_blocking(state, call, False, None)
00:17:39.279             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.279    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:39.279      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:39.279      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:39.279  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:39.279  	status = StatusCode.INVALID_ARGUMENT
00:17:39.279  	details = "Invalid volume uuid"
00:17:39.279  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume uuid", grpc_status:3, created_time:"2024-12-09T11:07:56.042905783+01:00"}"
00:17:39.279  >
00:17:39.279   11:07:56 sma.sma_vhost -- common/autotest_common.sh@655 -- # es=1
00:17:39.279   11:07:56 sma.sma_vhost -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:39.279   11:07:56 sma.sma_vhost -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:39.279   11:07:56 sma.sma_vhost -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:39.279    11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b e99f2a26-2e79-4fda-b56f-00fe50c0490a
00:17:39.279   11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61
00:17:39.279    11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys
00:17:39.279    11:07:56 sma.sma_vhost -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.279    11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits'
00:17:39.279    11:07:56 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:39.279    11:07:56 sma.sma_vhost -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.279   11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0
00:17:39.279   11:07:56 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:39.538  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:39.538  I0000 00:00:1733738876.296280  247079 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:39.538  I0000 00:00:1733738876.298067  247079 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:39.538  I0000 00:00:1733738876.299274  247080 subchannel.cc:806] subchannel 0x5581878ddb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5581878c8840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5581879e2380, grpc.internal.client_channel_call_destination=0x7fa73787f390, grpc.internal.event_engine=0x5581877f9ca0, grpc.internal.security_connector=0x5581878e0850, grpc.internal.subchannel_pool=0x5581878e06b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558187727770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:07:56.298843493+01:00"}), backing off for 999 ms
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000):
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-RESET: 1
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-ACKNOWLEDGE: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-FEATURES_OK: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-DRIVER_OK: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-DEVICE_NEED_RESET: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) 	-FAILED: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1
00:17:40.107  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:40.367  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:1
00:17:40.367  VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE
00:17:40.367  VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:49
00:17:40.367  VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed
00:17:40.367  {}
00:17:40.367   11:07:57 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup
00:17:40.367   11:07:57 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 243540
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 243540 ']'
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 243540
00:17:40.367    11:07:57 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:40.367    11:07:57 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 243540
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 243540'
00:17:40.367  killing process with pid 243540
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 243540
00:17:40.367   11:07:57 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 243540
00:17:41.304   11:07:58 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 243755
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@954 -- # '[' -z 243755 ']'
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@958 -- # kill -0 243755
00:17:41.304    11:07:58 sma.sma_vhost -- common/autotest_common.sh@959 -- # uname
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:41.304    11:07:58 sma.sma_vhost -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 243755
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@960 -- # process_name=python3
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@972 -- # echo 'killing process with pid 243755'
00:17:41.304  killing process with pid 243755
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@973 -- # kill 243755
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@978 -- # wait 243755
00:17:41.304   11:07:58 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@476 -- # local vm
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@477 -- # vm_list_all
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@466 -- # vms=()
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@466 -- # local vms
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@467 -- # vms=("$VM_DIR"/+([0-9]))
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@468 -- # (( 1 > 0 ))
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@469 -- # basename --multiple /root/vhost_test/vms/0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@477 -- # for vm in $(vm_list_all)
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@478 -- # vm_kill 0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@442 -- # vm_num_is_valid 0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@309 -- # [[ 0 =~ ^[0-9]+$ ]]
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@309 -- # return 0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@443 -- # local vm_dir=/root/vhost_test/vms/0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@445 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]]
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@449 -- # local vm_pid
00:17:41.304    11:07:58 sma.sma_vhost -- vhost/common.sh@450 -- # cat /root/vhost_test/vms/0/qemu.pid
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@450 -- # vm_pid=239364
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@452 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=239364)'
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=239364)'
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=239364)'
00:17:41.304  INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=239364)
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@454 -- # /bin/kill 239364
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@455 -- # notice 'process 239364 killed'
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 239364 killed'
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@61 -- # false
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out=
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@70 -- # shift
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: process 239364 killed'
00:17:41.304  INFO: process 239364 killed
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@456 -- # rm -rf /root/vhost_test/vms/0
00:17:41.304   11:07:58 sma.sma_vhost -- vhost/common.sh@481 -- # rm -rf /root/vhost_test/vms
00:17:41.304   11:07:58 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT
00:17:41.304  
00:17:41.304  real	0m43.382s
00:17:41.304  user	0m44.223s
00:17:41.304  sys	0m2.621s
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:41.304   11:07:58 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x
00:17:41.304  ************************************
00:17:41.304  END TEST sma_vhost
00:17:41.304  ************************************
00:17:41.304   11:07:58 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:41.304   11:07:58 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:41.304   11:07:58 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:41.304   11:07:58 sma -- common/autotest_common.sh@10 -- # set +x
00:17:41.304  ************************************
00:17:41.304  START TEST sma_crypto
00:17:41.304  ************************************
00:17:41.304   11:07:58 sma.sma_crypto -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh
00:17:41.563  * Looking for test storage...
00:17:41.563  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:41.563     11:07:58 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lcov --version
00:17:41.563     11:07:58 sma.sma_crypto -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-:
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-:
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<'
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@345 -- # : 1
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:41.563     11:07:58 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:41.563    11:07:58 sma.sma_crypto -- scripts/common.sh@368 -- # return 0
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:41.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.563  		--rc genhtml_branch_coverage=1
00:17:41.563  		--rc genhtml_function_coverage=1
00:17:41.563  		--rc genhtml_legend=1
00:17:41.563  		--rc geninfo_all_blocks=1
00:17:41.563  		--rc geninfo_unexecuted_blocks=1
00:17:41.563  		
00:17:41.563  		'
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:41.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.563  		--rc genhtml_branch_coverage=1
00:17:41.563  		--rc genhtml_function_coverage=1
00:17:41.563  		--rc genhtml_legend=1
00:17:41.563  		--rc geninfo_all_blocks=1
00:17:41.563  		--rc geninfo_unexecuted_blocks=1
00:17:41.563  		
00:17:41.563  		'
00:17:41.563    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:41.563  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.563  		--rc genhtml_branch_coverage=1
00:17:41.563  		--rc genhtml_function_coverage=1
00:17:41.563  		--rc genhtml_legend=1
00:17:41.563  		--rc geninfo_all_blocks=1
00:17:41.564  		--rc geninfo_unexecuted_blocks=1
00:17:41.564  		
00:17:41.564  		'
00:17:41.564    11:07:58 sma.sma_crypto -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:41.564  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:41.564  		--rc genhtml_branch_coverage=1
00:17:41.564  		--rc genhtml_function_coverage=1
00:17:41.564  		--rc genhtml_legend=1
00:17:41.564  		--rc geninfo_all_blocks=1
00:17:41.564  		--rc geninfo_unexecuted_blocks=1
00:17:41.564  		
00:17:41.564  		'
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@18 -- # tgtsock=/var/tmp/spdk.sock2
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=247579
00:17:41.564   11:07:58 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 247579
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 247579 ']'
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:41.564  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:41.564   11:07:58 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:41.564  [2024-12-09 11:07:58.507394] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:17:41.564  [2024-12-09 11:07:58.507499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247579 ]
00:17:41.564  EAL: No free 2048 kB hugepages reported on node 1
00:17:41.823  [2024-12-09 11:07:58.624110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:41.823  [2024-12-09 11:07:58.722962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:42.390   11:07:59 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:42.390   11:07:59 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:42.390   11:07:59 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module
00:17:42.649   11:07:59 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb
00:17:42.649   11:07:59 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.649   11:07:59 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:42.649  [2024-12-09 11:07:59.569664] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:17:42.649   11:07:59 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.649   11:07:59 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
00:17:42.907  [2024-12-09 11:07:59.754128] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:17:42.907   11:07:59 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
00:17:43.165  [2024-12-09 11:07:59.958652] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:17:43.165   11:07:59 sma.sma_crypto -- sma/crypto.sh@157 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:17:43.423  [2024-12-09 11:08:00.386804] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1
00:17:43.990   11:08:00 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=248004
00:17:43.990   11:08:00 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2
00:17:43.990   11:08:00 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=248005
00:17:43.990   11:08:00 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:43.990   11:08:00 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:43.990    11:08:00 sma.sma_crypto -- sma/crypto.sh@162 -- # cat
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:43.990   11:08:00 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:44.248  [2024-12-09 11:08:01.065719] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:17:44.248  [2024-12-09 11:08:01.065861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248004 ]
00:17:44.248  EAL: No free 2048 kB hugepages reported on node 1
00:17:44.248  [2024-12-09 11:08:01.203319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:44.248  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:44.248  I0000 00:00:1733738881.207044  248005 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:44.248  [2024-12-09 11:08:01.220933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:44.507  [2024-12-09 11:08:01.327798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:45.074   11:08:01 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:45.074   11:08:01 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:45.074   11:08:01 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:45.074   11:08:02 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:17:45.074    11:08:02 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen
00:17:45.074   11:08:02 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:45.074   11:08:02 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 248004 /var/tmp/spdk.sock2
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@835 -- # '[' -z 248004 ']'
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock2
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...'
00:17:45.074  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:45.074   11:08:02 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:45.333   11:08:02 sma.sma_crypto -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:45.333   11:08:02 sma.sma_crypto -- common/autotest_common.sh@868 -- # return 0
00:17:45.333   11:08:02 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2
00:17:45.591  [2024-12-09 11:08:02.471376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:45.591  [2024-12-09 11:08:02.487685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 ***
00:17:45.591  [2024-12-09 11:08:02.495538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 ***
00:17:45.591  malloc0
00:17:45.591    11:08:02 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle
00:17:45.591    11:08:02 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device
00:17:45.591    11:08:02 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:45.850  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:45.850  I0000 00:00:1733738882.727538  248250 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:45.850  I0000 00:00:1733738882.729294  248250 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:45.850  I0000 00:00:1733738882.730669  248258 subchannel.cc:806] subchannel 0x563cbc944b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563cbc92f840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563cbca49380, grpc.internal.client_channel_call_destination=0x7fcbd98c9390, grpc.internal.event_engine=0x563cbc860ca0, grpc.internal.security_connector=0x563cbc947850, grpc.internal.subchannel_pool=0x563cbc9476b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563cbc78e770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:02.730209549+01:00"}), backing off for 1000 ms
00:17:45.850  [2024-12-09 11:08:02.749594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:45.850   11:08:02 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:45.850   11:08:02 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:45.850   11:08:02 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:45.850   11:08:02 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:45.850   11:08:02 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher= key= key2= config
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:45.850     11:08:02 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:45.850      11:08:02 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:45.850      11:08:02 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:45.850  "nvmf": {
00:17:45.850    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:45.850    "discovery": {
00:17:45.850      "discovery_endpoints": [
00:17:45.850        {
00:17:45.850          "trtype": "tcp",
00:17:45.850          "traddr": "127.0.0.1",
00:17:45.850          "trsvcid": "8009"
00:17:45.850        }
00:17:45.850      ]
00:17:45.850    }
00:17:45.850  }'
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]]
00:17:45.850    11:08:02 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:46.108  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:46.108  I0000 00:00:1733738883.040520  248333 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:46.108  I0000 00:00:1733738883.042249  248333 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:46.108  I0000 00:00:1733738883.043755  248477 subchannel.cc:806] subchannel 0x556691fd2b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556691fbd840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5566920d7380, grpc.internal.client_channel_call_destination=0x7f750e6cf390, grpc.internal.event_engine=0x556691eeeca0, grpc.internal.security_connector=0x556691fd5850, grpc.internal.subchannel_pool=0x556691fd56b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x556691e1c770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:03.043195457+01:00"}), backing off for 1000 ms
00:17:47.484  {}
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name'
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=2b72348e-c667-431b-9f0f-da789d1bc8590n1
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name'
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b 2b72348e-c667-431b-9f0f-da789d1bc8590n1
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]]
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]]
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid'
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 45ee581a-a051-4362-8a05-abdbcdc6e348 == \4\5\e\e\5\8\1\a\-\a\0\5\1\-\4\3\6\2\-\8\a\0\5\-\a\b\d\b\c\d\c\6\e\3\4\8 ]]
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid'
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:47.484    11:08:04 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:47.484    11:08:04 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=45EE581A-A051-4362-8A05-ABDBCDC6E348
00:17:47.484    11:08:04 sma.sma_crypto -- sma/common.sh@41 -- # echo 45EE581AA05143628A05ABDBCDC6E348
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 45EE581AA05143628A05ABDBCDC6E348 == \4\5\E\E\5\8\1\A\A\0\5\1\4\3\6\2\8\A\0\5\A\B\D\B\C\D\C\6\E\3\4\8 ]]
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:47.484   11:08:04 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:47.484    11:08:04 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:47.484    11:08:04 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:47.742  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:47.742  I0000 00:00:1733738884.650179  248722 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:47.742  I0000 00:00:1733738884.651874  248722 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:47.742  I0000 00:00:1733738884.653207  248728 subchannel.cc:806] subchannel 0x55897f67eb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55897f669840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55897f783380, grpc.internal.client_channel_call_destination=0x7f305ef2a390, grpc.internal.event_engine=0x55897f59aca0, grpc.internal.security_connector=0x55897f681850, grpc.internal.subchannel_pool=0x55897f6816b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55897f4c8770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:04.652724556+01:00"}), backing off for 999 ms
00:17:47.742  {}
00:17:47.742   11:08:04 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:47.742   11:08:04 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:47.742   11:08:04 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:47.742   11:08:04 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:47.742    11:08:04 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:47.742    11:08:04 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:47.742    11:08:04 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:47.742     11:08:04 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:47.742      11:08:04 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:47.742      11:08:04 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:48.001  "nvmf": {
00:17:48.001    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:48.001    "discovery": {
00:17:48.001      "discovery_endpoints": [
00:17:48.001        {
00:17:48.001          "trtype": "tcp",
00:17:48.001          "traddr": "127.0.0.1",
00:17:48.001          "trsvcid": "8009"
00:17:48.001        }
00:17:48.001      ]
00:17:48.001    }
00:17:48.001  }'
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:48.001     11:08:04 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:48.001     11:08:04 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:48.001     11:08:04 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:48.001     11:08:04 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:48.001     11:08:04 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:48.001      11:08:04 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:48.001     11:08:04 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:48.001    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:48.001  }'
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:48.001    11:08:04 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:48.001  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:48.001  I0000 00:00:1733738884.980295  248748 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:48.001  I0000 00:00:1733738884.981967  248748 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:48.001  I0000 00:00:1733738884.983315  248767 subchannel.cc:806] subchannel 0x55ac6c07cb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ac6c067840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ac6c181380, grpc.internal.client_channel_call_destination=0x7fae67f7d390, grpc.internal.event_engine=0x55ac6bf98ca0, grpc.internal.security_connector=0x55ac6c07f850, grpc.internal.subchannel_pool=0x55ac6c07f6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ac6bec6770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:04.982873936+01:00"}), backing off for 999 ms
00:17:49.375  {}
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length'
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]]
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length'
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]]
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=45ee581a-a051-4362-8a05-abdbcdc6e348 ns ns_bdev
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:49.375    "nsid": 1,
00:17:49.375    "bdev_name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:49.375    "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:49.375    "nguid": "45EE581AA05143628A05ABDBCDC6E348",
00:17:49.375    "uuid": "45ee581a-a051-4362-8a05-abdbcdc6e348"
00:17:49.375  }'
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.375   11:08:06 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:49.375    11:08:06 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.375    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 45ee581a-a051-4362-8a05-abdbcdc6e348 == \4\5\e\e\5\8\1\a\-\a\0\5\1\-\4\3\6\2\-\8\a\0\5\-\a\b\d\b\c\d\c\6\e\3\4\8 ]]
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:49.634    11:08:06 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=45EE581A-A051-4362-8A05-ABDBCDC6E348
00:17:49.634    11:08:06 sma.sma_crypto -- sma/common.sh@41 -- # echo 45EE581AA05143628A05ABDBCDC6E348
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 45EE581AA05143628A05ABDBCDC6E348 == \4\5\E\E\5\8\1\A\A\0\5\1\4\3\6\2\8\A\0\5\A\B\D\B\C\D\C\6\E\3\4\8 ]]
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{
00:17:49.634    "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:49.634    "aliases": [
00:17:49.634      "d911ddb9-8f5a-59b0-9ca3-8de1eefbc6b3"
00:17:49.634    ],
00:17:49.634    "product_name": "crypto",
00:17:49.634    "block_size": 4096,
00:17:49.634    "num_blocks": 8192,
00:17:49.634    "uuid": "d911ddb9-8f5a-59b0-9ca3-8de1eefbc6b3",
00:17:49.634    "assigned_rate_limits": {
00:17:49.634      "rw_ios_per_sec": 0,
00:17:49.634      "rw_mbytes_per_sec": 0,
00:17:49.634      "r_mbytes_per_sec": 0,
00:17:49.634      "w_mbytes_per_sec": 0
00:17:49.634    },
00:17:49.634    "claimed": true,
00:17:49.634    "claim_type": "exclusive_write",
00:17:49.634    "zoned": false,
00:17:49.634    "supported_io_types": {
00:17:49.634      "read": true,
00:17:49.634      "write": true,
00:17:49.634      "unmap": true,
00:17:49.634      "flush": true,
00:17:49.634      "reset": true,
00:17:49.634      "nvme_admin": false,
00:17:49.634      "nvme_io": false,
00:17:49.634      "nvme_io_md": false,
00:17:49.634      "write_zeroes": true,
00:17:49.634      "zcopy": false,
00:17:49.634      "get_zone_info": false,
00:17:49.634      "zone_management": false,
00:17:49.634      "zone_append": false,
00:17:49.634      "compare": false,
00:17:49.634      "compare_and_write": false,
00:17:49.634      "abort": false,
00:17:49.634      "seek_hole": false,
00:17:49.634      "seek_data": false,
00:17:49.634      "copy": false,
00:17:49.634      "nvme_iov_md": false
00:17:49.634    },
00:17:49.634    "memory_domains": [
00:17:49.634      {
00:17:49.634        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:49.634        "dma_device_type": 2
00:17:49.634      }
00:17:49.634    ],
00:17:49.634    "driver_specific": {
00:17:49.634      "crypto": {
00:17:49.634        "base_bdev_name": "d0832b18-8c62-425f-820f-63ac2d8670790n1",
00:17:49.634        "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:49.634        "key_name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC"
00:17:49.634      }
00:17:49.634    }
00:17:49.634  }'
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k 9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:49.634    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[
00:17:49.634  {
00:17:49.634  "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC",
00:17:49.634  "cipher": "AES_CBC",
00:17:49.634  "key": "1234567890abcdef1234567890abcdef"
00:17:49.634  }
00:17:49.634  ]'
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key'
00:17:49.634   11:08:06 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:49.634    11:08:06 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher'
00:17:49.892   11:08:06 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:49.892   11:08:06 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:49.892   11:08:06 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:49.892   11:08:06 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:49.892   11:08:06 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:49.892    11:08:06 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:49.892    11:08:06 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:49.893     11:08:06 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:49.893      11:08:06 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:49.893      11:08:06 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:49.893  "nvmf": {
00:17:49.893    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:49.893    "discovery": {
00:17:49.893      "discovery_endpoints": [
00:17:49.893        {
00:17:49.893          "trtype": "tcp",
00:17:49.893          "traddr": "127.0.0.1",
00:17:49.893          "trsvcid": "8009"
00:17:49.893        }
00:17:49.893      ]
00:17:49.893    }
00:17:49.893  }'
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:49.893     11:08:06 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:49.893     11:08:06 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:49.893     11:08:06 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:49.893     11:08:06 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:49.893     11:08:06 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:49.893      11:08:06 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:49.893     11:08:06 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:49.893    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:49.893  }'
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:49.893    11:08:06 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:49.893  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:49.893  I0000 00:00:1733738886.891594  249222 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:49.893  I0000 00:00:1733738886.893188  249222 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:49.893  I0000 00:00:1733738886.895038  249241 subchannel.cc:806] subchannel 0x55cfcba0db20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cfcb9f8840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cfcbb12380, grpc.internal.client_channel_call_destination=0x7f0f8263d390, grpc.internal.event_engine=0x55cfcb929ca0, grpc.internal.security_connector=0x55cfcba10850, grpc.internal.subchannel_pool=0x55cfcba106b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cfcb857770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:06.894264869+01:00"}), backing off for 1000 ms
00:17:50.152  {}
00:17:50.152    11:08:06 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:50.152    11:08:06 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. | length'
00:17:50.152    11:08:06 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.152    11:08:06 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.152    11:08:06 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.152   11:08:06 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]]
00:17:50.152    11:08:06 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:50.152    11:08:06 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length'
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]]
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=45ee581a-a051-4362-8a05-abdbcdc6e348 ns ns_bdev
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:50.152    "nsid": 1,
00:17:50.152    "bdev_name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:50.152    "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:50.152    "nguid": "45EE581AA05143628A05ABDBCDC6E348",
00:17:50.152    "uuid": "45ee581a-a051-4362-8a05-abdbcdc6e348"
00:17:50.152  }'
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.152   11:08:07 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.152    11:08:07 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:50.152    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.411    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 45ee581a-a051-4362-8a05-abdbcdc6e348 == \4\5\e\e\5\8\1\a\-\a\0\5\1\-\4\3\6\2\-\8\a\0\5\-\a\b\d\b\c\d\c\6\e\3\4\8 ]]
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:50.411    11:08:07 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=45EE581A-A051-4362-8A05-ABDBCDC6E348
00:17:50.411    11:08:07 sma.sma_crypto -- sma/common.sh@41 -- # echo 45EE581AA05143628A05ABDBCDC6E348
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 45EE581AA05143628A05ABDBCDC6E348 == \4\5\E\E\5\8\1\A\A\0\5\1\4\3\6\2\8\A\0\5\A\B\D\B\C\D\C\6\E\3\4\8 ]]
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")'
00:17:50.411    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.411    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.411    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{
00:17:50.411    "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:50.411    "aliases": [
00:17:50.411      "d911ddb9-8f5a-59b0-9ca3-8de1eefbc6b3"
00:17:50.411    ],
00:17:50.411    "product_name": "crypto",
00:17:50.411    "block_size": 4096,
00:17:50.411    "num_blocks": 8192,
00:17:50.411    "uuid": "d911ddb9-8f5a-59b0-9ca3-8de1eefbc6b3",
00:17:50.411    "assigned_rate_limits": {
00:17:50.411      "rw_ios_per_sec": 0,
00:17:50.411      "rw_mbytes_per_sec": 0,
00:17:50.411      "r_mbytes_per_sec": 0,
00:17:50.411      "w_mbytes_per_sec": 0
00:17:50.411    },
00:17:50.411    "claimed": true,
00:17:50.411    "claim_type": "exclusive_write",
00:17:50.411    "zoned": false,
00:17:50.411    "supported_io_types": {
00:17:50.411      "read": true,
00:17:50.411      "write": true,
00:17:50.411      "unmap": true,
00:17:50.411      "flush": true,
00:17:50.411      "reset": true,
00:17:50.411      "nvme_admin": false,
00:17:50.411      "nvme_io": false,
00:17:50.411      "nvme_io_md": false,
00:17:50.411      "write_zeroes": true,
00:17:50.411      "zcopy": false,
00:17:50.411      "get_zone_info": false,
00:17:50.411      "zone_management": false,
00:17:50.411      "zone_append": false,
00:17:50.411      "compare": false,
00:17:50.411      "compare_and_write": false,
00:17:50.411      "abort": false,
00:17:50.411      "seek_hole": false,
00:17:50.411      "seek_data": false,
00:17:50.411      "copy": false,
00:17:50.411      "nvme_iov_md": false
00:17:50.411    },
00:17:50.411    "memory_domains": [
00:17:50.411      {
00:17:50.411        "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:50.411        "dma_device_type": 2
00:17:50.411      }
00:17:50.411    ],
00:17:50.411    "driver_specific": {
00:17:50.411      "crypto": {
00:17:50.411        "base_bdev_name": "d0832b18-8c62-425f-820f-63ac2d8670790n1",
00:17:50.411        "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:50.411        "key_name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC"
00:17:50.411      }
00:17:50.411    }
00:17:50.411  }'
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ 9d37dad7-0d1a-457b-99c3-fe21ce1be73c == 9d37dad7-0d1a-457b-99c3-fe21ce1be73c ]]
00:17:50.411    11:08:07 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name
00:17:50.411   11:08:07 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC
00:17:50.412    11:08:07 sma.sma_crypto -- sma/crypto.sh@227 -- # rpc_cmd accel_crypto_keys_get -k 9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC
00:17:50.412    11:08:07 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.412    11:08:07 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:50.412    11:08:07 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.412   11:08:07 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[
00:17:50.412  {
00:17:50.412  "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c_AES_CBC",
00:17:50.412  "cipher": "AES_CBC",
00:17:50.412  "key": "1234567890abcdef1234567890abcdef"
00:17:50.412  }
00:17:50.412  ]'
00:17:50.412    11:08:07 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key'
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]]
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher'
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]]
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_XTS 1234567890abcdef1234567890abcdef
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_XTS 1234567890abcdef1234567890abcdef
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:50.718    11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:50.718   11:08:07 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_XTS 1234567890abcdef1234567890abcdef
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:50.718   11:08:07 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_XTS 1234567890abcdef1234567890abcdef
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:50.718     11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:50.718      11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:50.718      11:08:07 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:50.718  "nvmf": {
00:17:50.718    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:50.718    "discovery": {
00:17:50.718      "discovery_endpoints": [
00:17:50.718        {
00:17:50.718          "trtype": "tcp",
00:17:50.718          "traddr": "127.0.0.1",
00:17:50.718          "trsvcid": "8009"
00:17:50.718        }
00:17:50.718      ]
00:17:50.718    }
00:17:50.718  }'
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]]
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:50.718     11:08:07 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS
00:17:50.718     11:08:07 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:50.718     11:08:07 sma.sma_crypto -- sma/common.sh@29 -- # echo 1
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:50.718     11:08:07 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:50.718     11:08:07 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:50.718      11:08:07 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:50.718     11:08:07 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:50.718    "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:50.718  }'
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:50.718    11:08:07 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:51.059  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.059  I0000 00:00:1733738887.736442  249304 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.059  I0000 00:00:1733738887.738123  249304 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.059  I0000 00:00:1733738887.739685  249513 subchannel.cc:806] subchannel 0x55b3ede70b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b3ede5b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b3edf75380, grpc.internal.client_channel_call_destination=0x7f212c58e390, grpc.internal.event_engine=0x55b3edd8cca0, grpc.internal.security_connector=0x55b3ede73850, grpc.internal.subchannel_pool=0x55b3ede736b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b3edcba770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:07.739180247+01:00"}), backing off for 1000 ms
00:17:51.059  Traceback (most recent call last):
00:17:51.059    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:51.059      main(sys.argv[1:])
00:17:51.059    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:51.059      result = client.call(request['method'], request.get('params', {}))
00:17:51.059               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.059    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:51.059      response = func(request=json_format.ParseDict(params, input()))
00:17:51.059                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.059    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:51.059      return _end_unary_response_blocking(state, call, False, None)
00:17:51.059             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.059    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:51.059      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:51.059      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.059  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:51.059  	status = StatusCode.INVALID_ARGUMENT
00:17:51.059  	details = "Invalid volume crypto configuration: bad cipher"
00:17:51.059  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:07.757070641+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:51.059  >
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.059   11:08:07 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.059    11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.059   11:08:07 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:51.059   11:08:07 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:51.059   11:08:07 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:51.059   11:08:07 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC deadbeefcafebabefeedbeefbabecafe
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:51.059     11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:51.059      11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.059      11:08:07 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:51.059  "nvmf": {
00:17:51.059    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:51.059    "discovery": {
00:17:51.059      "discovery_endpoints": [
00:17:51.059        {
00:17:51.059          "trtype": "tcp",
00:17:51.059          "traddr": "127.0.0.1",
00:17:51.059          "trsvcid": "8009"
00:17:51.059        }
00:17:51.059      ]
00:17:51.059    }
00:17:51.059  }'
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:51.059     11:08:07 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:51.059     11:08:07 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:51.059     11:08:07 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:51.059     11:08:07 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:51.059     11:08:07 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:51.059      11:08:07 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:51.059     11:08:07 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:51.059    "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:51.059  }'
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:51.059    11:08:07 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:51.059  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.060  I0000 00:00:1733738888.030072  249534 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.060  I0000 00:00:1733738888.031508  249534 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.060  I0000 00:00:1733738888.032941  249554 subchannel.cc:806] subchannel 0x55d36e755b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d36e740840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d36e85a380, grpc.internal.client_channel_call_destination=0x7fc70c2dd390, grpc.internal.event_engine=0x55d36e671ca0, grpc.internal.security_connector=0x55d36e758850, grpc.internal.subchannel_pool=0x55d36e7586b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d36e59f770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:08.032461127+01:00"}), backing off for 1000 ms
00:17:51.060  Traceback (most recent call last):
00:17:51.060    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:51.060      main(sys.argv[1:])
00:17:51.060    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:51.060      result = client.call(request['method'], request.get('params', {}))
00:17:51.060               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.060    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:51.060      response = func(request=json_format.ParseDict(params, input()))
00:17:51.060                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.060    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:51.060      return _end_unary_response_blocking(state, call, False, None)
00:17:51.060             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.060    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:51.060      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:51.060      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.060  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:51.060  	status = StatusCode.INVALID_ARGUMENT
00:17:51.060  	details = "Invalid volume crypto configuration: bad key"
00:17:51.060  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key", grpc_status:3, created_time:"2024-12-09T11:08:08.050270768+01:00"}"
00:17:51.060  >
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.324   11:08:08 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.324    11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.324   11:08:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:51.324   11:08:08 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:51.324   11:08:08 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:51.324   11:08:08 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:51.324     11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:51.324      11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.324      11:08:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:51.324  "nvmf": {
00:17:51.324    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:51.324    "discovery": {
00:17:51.324      "discovery_endpoints": [
00:17:51.324        {
00:17:51.324          "trtype": "tcp",
00:17:51.324          "traddr": "127.0.0.1",
00:17:51.324          "trsvcid": "8009"
00:17:51.324        }
00:17:51.324      ]
00:17:51.324    }
00:17:51.324  }'
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:51.324     11:08:08 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:51.324     11:08:08 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:51.324     11:08:08 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:51.324     11:08:08 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:51.324     11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:51.324      11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]]
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"")
00:17:51.324     11:08:08 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe
00:17:51.324     11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:51.324      11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe
00:17:51.324     11:08:08 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:51.324    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU="
00:17:51.324  }'
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:51.324    11:08:08 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:51.324  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.324  I0000 00:00:1733738888.319504  249577 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.324  I0000 00:00:1733738888.321197  249577 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.324  I0000 00:00:1733738888.322612  249593 subchannel.cc:806] subchannel 0x5633e2865b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5633e2850840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5633e296a380, grpc.internal.client_channel_call_destination=0x7f128f1e1390, grpc.internal.event_engine=0x5633e26ffcd0, grpc.internal.security_connector=0x5633e27ddec0, grpc.internal.subchannel_pool=0x5633e2867040, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5633e26af770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:08.322161894+01:00"}), backing off for 1000 ms
00:17:51.605  Traceback (most recent call last):
00:17:51.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:51.605      main(sys.argv[1:])
00:17:51.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:51.605      result = client.call(request['method'], request.get('params', {}))
00:17:51.605               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.605    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:51.605      response = func(request=json_format.ParseDict(params, input()))
00:17:51.605                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.605    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:51.605      return _end_unary_response_blocking(state, call, False, None)
00:17:51.605             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.605    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:51.605      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:51.605      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.605  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:51.605  	status = StatusCode.INVALID_ARGUMENT
00:17:51.605  	details = "Invalid volume crypto configuration: bad key2"
00:17:51.605  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key2", grpc_status:3, created_time:"2024-12-09T11:08:08.338758721+01:00"}"
00:17:51.605  >
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.605   11:08:08 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.605    11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:51.605   11:08:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:51.605   11:08:08 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:51.605   11:08:08 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:51.605   11:08:08 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:51.605     11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:51.605      11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.605      11:08:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:51.605  "nvmf": {
00:17:51.605    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:51.605    "discovery": {
00:17:51.605      "discovery_endpoints": [
00:17:51.605        {
00:17:51.605          "trtype": "tcp",
00:17:51.605          "traddr": "127.0.0.1",
00:17:51.605          "trsvcid": "8009"
00:17:51.605        }
00:17:51.605      ]
00:17:51.605    }
00:17:51.605  }'
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:51.605     11:08:08 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:51.605     11:08:08 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:51.605     11:08:08 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:51.605     11:08:08 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:51.605     11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:51.605      11:08:08 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:51.605     11:08:08 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:51.605    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:51.605  }'
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:51.605    11:08:08 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:51.907  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:51.907  I0000 00:00:1733738888.604660  249615 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:51.907  I0000 00:00:1733738888.606242  249615 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:51.907  I0000 00:00:1733738888.607618  249629 subchannel.cc:806] subchannel 0x55e8c6f9eb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e8c6f89840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e8c70a3380, grpc.internal.client_channel_call_destination=0x7f7e483c4390, grpc.internal.event_engine=0x55e8c6ebaca0, grpc.internal.security_connector=0x55e8c6fa1850, grpc.internal.subchannel_pool=0x55e8c6fa16b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e8c6de8770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:08.607176652+01:00"}), backing off for 1000 ms
00:17:51.907  Traceback (most recent call last):
00:17:51.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:51.907      main(sys.argv[1:])
00:17:51.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:51.907      result = client.call(request['method'], request.get('params', {}))
00:17:51.907               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.907    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:51.907      response = func(request=json_format.ParseDict(params, input()))
00:17:51.907                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.907    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:51.907      return _end_unary_response_blocking(state, call, False, None)
00:17:51.907             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.907    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:51.907      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:51.907      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:51.907  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:51.907  	status = StatusCode.INVALID_ARGUMENT
00:17:51.907  	details = "Invalid volume crypto configuration: bad cipher"
00:17:51.907  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:08.623764491+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:51.907  >
00:17:51.907   11:08:08 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:51.907   11:08:08 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:51.907   11:08:08 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:51.907   11:08:08 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=45ee581a-a051-4362-8a05-abdbcdc6e348 ns ns_bdev
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:51.907    "nsid": 1,
00:17:51.907    "bdev_name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:51.907    "name": "9d37dad7-0d1a-457b-99c3-fe21ce1be73c",
00:17:51.907    "nguid": "45EE581AA05143628A05ABDBCDC6E348",
00:17:51.907    "uuid": "45ee581a-a051-4362-8a05-abdbcdc6e348"
00:17:51.907  }'
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 9d37dad7-0d1a-457b-99c3-fe21ce1be73c
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:51.907    11:08:08 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 45ee581a-a051-4362-8a05-abdbcdc6e348 == \4\5\e\e\5\8\1\a\-\a\0\5\1\-\4\3\6\2\-\8\a\0\5\-\a\b\d\b\c\d\c\6\e\3\4\8 ]]
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.907    11:08:08 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=45EE581A-A051-4362-8A05-ABDBCDC6E348
00:17:51.907    11:08:08 sma.sma_crypto -- sma/common.sh@41 -- # echo 45EE581AA05143628A05ABDBCDC6E348
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 45EE581AA05143628A05ABDBCDC6E348 == \4\5\E\E\5\8\1\A\A\0\5\1\4\3\6\2\8\A\0\5\A\B\D\B\C\D\C\6\E\3\4\8 ]]
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.907   11:08:08 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:51.907    11:08:08 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:51.907    11:08:08 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:52.182  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:52.182  I0000 00:00:1733738889.115761  249865 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:52.182  I0000 00:00:1733738889.117336  249865 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:52.182  I0000 00:00:1733738889.118697  249876 subchannel.cc:806] subchannel 0x55e5e004ab20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e5e0035840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e5e014f380, grpc.internal.client_channel_call_destination=0x7f777b149390, grpc.internal.event_engine=0x55e5dff66ca0, grpc.internal.security_connector=0x55e5e004d850, grpc.internal.subchannel_pool=0x55e5e004d6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e5dfe94770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:09.118242219+01:00"}), backing off for 1000 ms
00:17:52.456  {}
00:17:52.456   11:08:09 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.456    11:08:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:52.456   11:08:09 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:52.456   11:08:09 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:52.456   11:08:09 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:52.456   11:08:09 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:52.456     11:08:09 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:52.456      11:08:09 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:52.456      11:08:09 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:52.456  "nvmf": {
00:17:52.456    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:52.456    "discovery": {
00:17:52.456      "discovery_endpoints": [
00:17:52.456        {
00:17:52.456          "trtype": "tcp",
00:17:52.456          "traddr": "127.0.0.1",
00:17:52.456          "trsvcid": "8009"
00:17:52.456        }
00:17:52.456      ]
00:17:52.456    }
00:17:52.456  }'
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:52.456     11:08:09 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:52.456     11:08:09 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:52.456     11:08:09 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:52.456     11:08:09 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:52.456     11:08:09 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:52.456      11:08:09 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:52.456     11:08:09 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:52.456    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:52.456  }'
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:52.456    11:08:09 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:52.456  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:52.456  I0000 00:00:1733738889.454155  249899 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:52.456  I0000 00:00:1733738889.455665  249899 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:52.456  I0000 00:00:1733738889.457066  249913 subchannel.cc:806] subchannel 0x56511aeb1b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56511ae9c840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56511afb6380, grpc.internal.client_channel_call_destination=0x7fcdc5037390, grpc.internal.event_engine=0x56511adcdca0, grpc.internal.security_connector=0x56511aeb4850, grpc.internal.subchannel_pool=0x56511aeb46b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56511acfb770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:09.456580295+01:00"}), backing off for 1000 ms
00:17:53.883  Traceback (most recent call last):
00:17:53.883    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:53.883      main(sys.argv[1:])
00:17:53.883    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:53.883      result = client.call(request['method'], request.get('params', {}))
00:17:53.883               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.883    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:53.883      response = func(request=json_format.ParseDict(params, input()))
00:17:53.883                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.883    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:53.883      return _end_unary_response_blocking(state, call, False, None)
00:17:53.883             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.883    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:53.883      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:53.883      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:53.883  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:53.883  	status = StatusCode.INVALID_ARGUMENT
00:17:53.883  	details = "Invalid volume crypto configuration: bad cipher"
00:17:53.883  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:10.578148927+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:53.883  >
00:17:53.883   11:08:10 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:53.883   11:08:10 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:53.883   11:08:10 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:53.883   11:08:10 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length'
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.883   11:08:10 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]]
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length'
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.883   11:08:10 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]]
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:53.883    11:08:10 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length
00:17:53.883    11:08:10 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.883   11:08:10 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]]
00:17:53.884   11:08:10 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:53.884   11:08:10 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.185  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:54.185  I0000 00:00:1733738890.986755  250156 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:54.185  I0000 00:00:1733738890.988822  250156 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:54.185  I0000 00:00:1733738890.992529  250162 subchannel.cc:806] subchannel 0x562ed1672b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562ed165d840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562ed1777380, grpc.internal.client_channel_call_destination=0x7f90ccc1c390, grpc.internal.event_engine=0x562ed158eca0, grpc.internal.security_connector=0x562ed167cdf0, grpc.internal.subchannel_pool=0x562ed16756b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562ed14bc770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:10.992053311+01:00"}), backing off for 1000 ms
00:17:54.185  {}
00:17:54.185    11:08:11 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:54.185    11:08:11 sma.sma_crypto -- sma/crypto.sh@255 -- # jq -r .handle
00:17:54.185    11:08:11 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:54.185     11:08:11 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:54.185     11:08:11 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:54.185     11:08:11 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:54.185      11:08:11 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:54.185       11:08:11 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:54.185       11:08:11 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:54.185     11:08:11 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:54.185  "nvmf": {
00:17:54.185    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:54.185    "discovery": {
00:17:54.185      "discovery_endpoints": [
00:17:54.185        {
00:17:54.185          "trtype": "tcp",
00:17:54.185          "traddr": "127.0.0.1",
00:17:54.185          "trsvcid": "8009"
00:17:54.185        }
00:17:54.185      ]
00:17:54.185    }
00:17:54.186  }'
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:54.186      11:08:11 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:54.186      11:08:11 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:54.186      11:08:11 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:54.186      11:08:11 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:54.186      11:08:11 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63
00:17:54.186       11:08:11 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:54.186      11:08:11 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:54.186    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:54.186  }'
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:54.186     11:08:11 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:54.466  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:54.466  I0000 00:00:1733738891.307193  250191 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:54.466  I0000 00:00:1733738891.309111  250191 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:17:54.466  I0000 00:00:1733738891.310658  250398 subchannel.cc:806] subchannel 0x558c5f145b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558c5f130840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558c5f24a380, grpc.internal.client_channel_call_destination=0x7f1c11b16390, grpc.internal.event_engine=0x558c5f1d0bb0, grpc.internal.security_connector=0x558c5f0bde30, grpc.internal.subchannel_pool=0x558c5f147060, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558c5ef8f770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:11.310203691+01:00"}), backing off for 1000 ms
00:17:55.499  [2024-12-09 11:08:12.429846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:55.499   11:08:12 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:55.499   11:08:12 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:55.499   11:08:12 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=45ee581a-a051-4362-8a05-abdbcdc6e348 ns ns_bdev
00:17:55.499    11:08:12 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0
00:17:55.499    11:08:12 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]'
00:17:55.499    11:08:12 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.499    11:08:12 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{
00:17:55.786    "nsid": 1,
00:17:55.786    "bdev_name": "72d48fdf-7bbb-486f-bdd8-6781ed004ba3",
00:17:55.786    "name": "72d48fdf-7bbb-486f-bdd8-6781ed004ba3",
00:17:55.786    "nguid": "45EE581AA05143628A05ABDBCDC6E348",
00:17:55.786    "uuid": "45ee581a-a051-4362-8a05-abdbcdc6e348"
00:17:55.786  }'
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=72d48fdf-7bbb-486f-bdd8-6781ed004ba3
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b 72d48fdf-7bbb-486f-bdd8-6781ed004ba3
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name'
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]]
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length'
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:55.786    11:08:12 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]]
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 45ee581a-a051-4362-8a05-abdbcdc6e348 == \4\5\e\e\5\8\1\a\-\a\0\5\1\-\4\3\6\2\-\8\a\0\5\-\a\b\d\b\c\d\c\6\e\3\4\8 ]]
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:55.786    11:08:12 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=45EE581A-A051-4362-8A05-ABDBCDC6E348
00:17:55.786    11:08:12 sma.sma_crypto -- sma/common.sh@41 -- # echo 45EE581AA05143628A05ABDBCDC6E348
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 45EE581AA05143628A05ABDBCDC6E348 == \4\5\E\E\5\8\1\A\A\0\5\1\4\3\6\2\8\A\0\5\A\B\D\B\C\D\C\6\E\3\4\8 ]]
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:55.786   11:08:12 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:55.786    11:08:12 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:55.786    11:08:12 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:56.055  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:17:56.055  I0000 00:00:1733738892.960898  250650 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:17:56.055  {}
00:17:56.055   11:08:13 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:56.055   11:08:13 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:56.331  {}
00:17:56.331   11:08:13 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg create_device 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=create_device
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:56.331    11:08:13 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t create_device
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:56.331   11:08:13 sma.sma_crypto -- common/autotest_common.sh@655 -- # create_device 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:56.331   11:08:13 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 8 1234567890abcdef1234567890abcdef
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=8 key=1234567890abcdef1234567890abcdef key2= config
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:56.331     11:08:13 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:56.331      11:08:13 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:56.331      11:08:13 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:56.331  "nvmf": {
00:17:56.331    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:56.331    "discovery": {
00:17:56.331      "discovery_endpoints": [
00:17:56.331        {
00:17:56.331          "trtype": "tcp",
00:17:56.331          "traddr": "127.0.0.1",
00:17:56.331          "trsvcid": "8009"
00:17:56.331        }
00:17:56.331      ]
00:17:56.331    }
00:17:56.331  }'
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]]
00:17:56.331    11:08:13 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:56.331     11:08:13 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8
00:17:56.331     11:08:13 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:56.331     11:08:13 sma.sma_crypto -- sma/common.sh@30 -- # echo 8
00:17:56.610    11:08:13 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:56.610     11:08:13 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:56.610     11:08:13 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:56.610      11:08:13 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:56.610    11:08:13 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:56.610     11:08:13 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:56.610    11:08:13 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:56.610    "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:56.610  }'
00:17:56.610    11:08:13 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:56.610    11:08:13 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:17:58.078  Traceback (most recent call last):
00:17:58.078    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:17:58.078      main(sys.argv[1:])
00:17:58.078    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:17:58.078      result = client.call(request['method'], request.get('params', {}))
00:17:58.078               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:58.078    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:17:58.078      response = func(request=json_format.ParseDict(params, input()))
00:17:58.078                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:58.078    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:17:58.078      return _end_unary_response_blocking(state, call, False, None)
00:17:58.078             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:58.078    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:17:58.078      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:17:58.078      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:17:58.078  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:17:58.078  	status = StatusCode.INVALID_ARGUMENT
00:17:58.078  	details = "Invalid volume crypto configuration: bad cipher"
00:17:58.078  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:14.688243868+01:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}"
00:17:58.078  >
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length'
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]]
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]]
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length'
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]]
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 248005
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 248005 ']'
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 248005
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:58.078    11:08:14 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248005
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248005'
00:17:58.078  killing process with pid 248005
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 248005
00:17:58.078   11:08:14 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 248005
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=251161
00:17:58.078    11:08:14 sma.sma_crypto -- sma/crypto.sh@270 -- # cat
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten
00:17:58.078   11:08:14 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 ))
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:58.078   11:08:14 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s
00:17:58.985   11:08:15 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ ))
00:17:58.985   11:08:15 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 ))
00:17:58.985   11:08:15 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:17:58.985   11:08:15 sma.sma_crypto -- sma/common.sh@12 -- # return 0
00:17:59.258    11:08:15 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device
00:17:59.258    11:08:15 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle
00:17:59.258    11:08:15 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:59.258  [2024-12-09 11:08:16.216503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:17:59.258   11:08:16 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:59.258   11:08:16 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@652 -- # local es=0
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@654 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@640 -- # local arg=attach_volume
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:59.258    11:08:16 sma.sma_crypto -- common/autotest_common.sh@644 -- # type -t attach_volume
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:59.258   11:08:16 sma.sma_crypto -- common/autotest_common.sh@655 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:59.258   11:08:16 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:17:59.258   11:08:16 sma.sma_crypto -- sma/crypto.sh@106 -- # shift
00:17:59.258   11:08:16 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:17:59.258    11:08:16 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 45ee581a-a051-4362-8a05-abdbcdc6e348 AES_CBC 1234567890abcdef1234567890abcdef
00:17:59.258    11:08:16 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=45ee581a-a051-4362-8a05-abdbcdc6e348 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config
00:17:59.258    11:08:16 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto
00:17:59.258     11:08:16 sma.sma_crypto -- sma/crypto.sh@47 -- # cat
00:17:59.258      11:08:16 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 45ee581a-a051-4362-8a05-abdbcdc6e348
00:17:59.258      11:08:16 sma.sma_crypto -- sma/common.sh@20 -- # python
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "Re5YGqBRQ2KKBavbzcbjSA==",
00:17:59.533  "nvmf": {
00:17:59.533    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:59.533    "discovery": {
00:17:59.533      "discovery_endpoints": [
00:17:59.533        {
00:17:59.533          "trtype": "tcp",
00:17:59.533          "traddr": "127.0.0.1",
00:17:59.533          "trsvcid": "8009"
00:17:59.533        }
00:17:59.533      ]
00:17:59.533    }
00:17:59.533  }'
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config")
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=,
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]]
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)")
00:17:59.533     11:08:16 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC
00:17:59.533     11:08:16 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in
00:17:59.533     11:08:16 sma.sma_crypto -- sma/common.sh@28 -- # echo 0
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"")
00:17:59.533     11:08:16 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef
00:17:59.533     11:08:16 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62
00:17:59.533      11:08:16 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]]
00:17:59.533     11:08:16 sma.sma_crypto -- sma/crypto.sh@64 -- # cat
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": {
00:17:59.533    "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="
00:17:59.533  }'
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config")
00:17:59.533    11:08:16 sma.sma_crypto -- sma/crypto.sh@69 -- # cat
00:18:01.014  Traceback (most recent call last):
00:18:01.014    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:01.014      main(sys.argv[1:])
00:18:01.014    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:01.014      result = client.call(request['method'], request.get('params', {}))
00:18:01.015               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:01.015    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:01.015      response = func(request=json_format.ParseDict(params, input()))
00:18:01.015                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:01.015    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:01.015      return _end_unary_response_blocking(state, call, False, None)
00:18:01.015             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:01.015    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:01.015      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:01.015      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:01.015  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:01.015  	status = StatusCode.INVALID_ARGUMENT
00:18:01.015  	details = "Crypto is disabled"
00:18:01.015  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Crypto is disabled", grpc_status:3, created_time:"2024-12-09T11:08:17.611863812+01:00"}"
00:18:01.015  >
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@655 -- # es=1
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:01.015    11:08:17 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.015    11:08:17 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. | length'
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.015   11:08:17 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]]
00:18:01.015    11:08:17 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:18:01.015    11:08:17 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.015   11:08:17 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]]
00:18:01.015   11:08:17 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup
00:18:01.015   11:08:17 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 251161
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 251161 ']'
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 251161
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251161
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=python3
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251161'
00:18:01.015  killing process with pid 251161
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 251161
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 251161
00:18:01.015   11:08:17 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 247579
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 247579 ']'
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 247579
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:01.015    11:08:17 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 247579
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 247579'
00:18:01.015  killing process with pid 247579
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 247579
00:18:01.015   11:08:17 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 247579
00:18:03.113   11:08:19 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 248004
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@954 -- # '[' -z 248004 ']'
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@958 -- # kill -0 248004
00:18:03.113    11:08:19 sma.sma_crypto -- common/autotest_common.sh@959 -- # uname
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:03.113    11:08:19 sma.sma_crypto -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 248004
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@972 -- # echo 'killing process with pid 248004'
00:18:03.113  killing process with pid 248004
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@973 -- # kill 248004
00:18:03.113   11:08:19 sma.sma_crypto -- common/autotest_common.sh@978 -- # wait 248004
00:18:05.113   11:08:21 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT
00:18:05.113  
00:18:05.113  real	0m23.399s
00:18:05.113  user	0m48.798s
00:18:05.113  sys	0m2.871s
00:18:05.113   11:08:21 sma.sma_crypto -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:05.113   11:08:21 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x
00:18:05.113  ************************************
00:18:05.113  END TEST sma_crypto
00:18:05.113  ************************************
00:18:05.113   11:08:21 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:18:05.113   11:08:21 sma -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:05.113   11:08:21 sma -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:05.113   11:08:21 sma -- common/autotest_common.sh@10 -- # set +x
00:18:05.113  ************************************
00:18:05.113  START TEST sma_qos
00:18:05.113  ************************************
00:18:05.113   11:08:21 sma.sma_qos -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh
00:18:05.113  * Looking for test storage...
00:18:05.113  * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:05.113     11:08:21 sma.sma_qos -- common/autotest_common.sh@1711 -- # lcov --version
00:18:05.113     11:08:21 sma.sma_qos -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-:
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-:
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<'
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@345 -- # : 1
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@353 -- # local d=1
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@355 -- # echo 1
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@353 -- # local d=2
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:05.113     11:08:21 sma.sma_qos -- scripts/common.sh@355 -- # echo 2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:05.113    11:08:21 sma.sma_qos -- scripts/common.sh@368 -- # return 0
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:05.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:05.113  		--rc genhtml_branch_coverage=1
00:18:05.113  		--rc genhtml_function_coverage=1
00:18:05.113  		--rc genhtml_legend=1
00:18:05.113  		--rc geninfo_all_blocks=1
00:18:05.113  		--rc geninfo_unexecuted_blocks=1
00:18:05.113  		
00:18:05.113  		'
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:05.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:05.113  		--rc genhtml_branch_coverage=1
00:18:05.113  		--rc genhtml_function_coverage=1
00:18:05.113  		--rc genhtml_legend=1
00:18:05.113  		--rc geninfo_all_blocks=1
00:18:05.113  		--rc geninfo_unexecuted_blocks=1
00:18:05.113  		
00:18:05.113  		'
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:05.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:05.113  		--rc genhtml_branch_coverage=1
00:18:05.113  		--rc genhtml_function_coverage=1
00:18:05.113  		--rc genhtml_legend=1
00:18:05.113  		--rc geninfo_all_blocks=1
00:18:05.113  		--rc geninfo_unexecuted_blocks=1
00:18:05.113  		
00:18:05.113  		'
00:18:05.113    11:08:21 sma.sma_qos -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:05.113  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:05.113  		--rc genhtml_branch_coverage=1
00:18:05.113  		--rc genhtml_function_coverage=1
00:18:05.113  		--rc genhtml_legend=1
00:18:05.113  		--rc geninfo_all_blocks=1
00:18:05.113  		--rc geninfo_unexecuted_blocks=1
00:18:05.113  		
00:18:05.113  		'
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3
00:18:05.113    11:08:21 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=252454
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@55 -- # smapid=252456
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@57 -- # sma_waitforlisten
00:18:05.113   11:08:21 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 ))
00:18:05.113    11:08:21 sma.sma_qos -- sma/qos.sh@47 -- # cat
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:05.113   11:08:21 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:18:05.113  [2024-12-09 11:08:21.980909] Starting SPDK v25.01-pre git sha1 04ba75cf7 / DPDK 24.03.0 initialization...
00:18:05.113  [2024-12-09 11:08:21.981013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252454 ]
00:18:05.113  EAL: No free 2048 kB hugepages reported on node 1
00:18:05.113  [2024-12-09 11:08:22.116817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:05.371  [2024-12-09 11:08:22.233240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:05.937   11:08:22 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:18:05.937   11:08:22 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:18:05.937   11:08:22 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:05.937   11:08:22 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s
00:18:06.210  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:06.210  I0000 00:00:1733738903.112591  252456 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:06.210  [2024-12-09 11:08:23.124344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:07.148   11:08:23 sma.sma_qos -- sma/common.sh@10 -- # (( i++ ))
00:18:07.148   11:08:23 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 ))
00:18:07.148   11:08:23 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080
00:18:07.148   11:08:23 sma.sma_qos -- sma/common.sh@12 -- # return 0
00:18:07.148   11:08:23 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096
00:18:07.148   11:08:23 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.148   11:08:23 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:07.148  null0
00:18:07.148   11:08:23 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.148    11:08:23 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:07.148    11:08:23 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid'
00:18:07.148    11:08:23 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.148    11:08:23 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:07.148    11:08:23 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.148   11:08:24 sma.sma_qos -- sma/qos.sh@61 -- # uuid=b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:07.148    11:08:24 sma.sma_qos -- sma/qos.sh@62 -- # create_device b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:07.148    11:08:24 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:07.148    11:08:24 sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle
00:18:07.148     11:08:24 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:07.148     11:08:24 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:07.407  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:07.407  I0000 00:00:1733738904.351266  252862 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:07.407  I0000 00:00:1733738904.353343  252862 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:07.407  I0000 00:00:1733738904.354987  253045 subchannel.cc:806] subchannel 0x56401f4b3b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56401f49e840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56401f5b8380, grpc.internal.client_channel_call_destination=0x7f4b26df2390, grpc.internal.event_engine=0x56401f3cfca0, grpc.internal.security_connector=0x56401f4b6850, grpc.internal.subchannel_pool=0x56401f4b66b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56401f2fd770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:24.35443465+01:00"}), backing off for 1000 ms
00:18:07.407  [2024-12-09 11:08:24.384069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:18:07.665   11:08:24 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0
00:18:07.665   11:08:24 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61
00:18:07.665    11:08:24 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:18:07.665    11:08:24 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3
00:18:07.665    11:08:24 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys
00:18:07.665    11:08:24 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:18:07.665     11:08:24 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:07.665    11:08:24 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:18:07.665    11:08:24 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:18:07.665  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:07.665  I0000 00:00:1733738904.620048  253092 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:07.665  I0000 00:00:1733738904.621684  253092 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:07.665  I0000 00:00:1733738904.622976  253095 subchannel.cc:806] subchannel 0x5637a4a911a0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5637a48a2480, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5637a4a8a8b0, grpc.internal.client_channel_call_destination=0x7f3c63e89390, grpc.internal.event_engine=0x5637a4959480, grpc.internal.security_connector=0x5637a4a8a100, grpc.internal.subchannel_pool=0x5637a4966a00, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5637a4859320, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:24.622510508+01:00"}), backing off for 1000 ms
00:18:07.665   11:08:24 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg get_qos_caps 1234
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=get_qos_caps
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:07.665    11:08:24 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t get_qos_caps
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:07.665   11:08:24 sma.sma_qos -- common/autotest_common.sh@655 -- # get_qos_caps 1234
00:18:07.665   11:08:24 sma.sma_qos -- sma/common.sh@45 -- # local rootdir
00:18:07.665    11:08:24 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh
00:18:07.665   11:08:24 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../..
00:18:07.665   11:08:24 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py
00:18:07.924  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:07.924  I0000 00:00:1733738904.851584  253118 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:07.924  I0000 00:00:1733738904.853373  253118 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:07.924  I0000 00:00:1733738904.854507  253123 subchannel.cc:806] subchannel 0x563aee54f1a0 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563aee360480, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563aee5488b0, grpc.internal.client_channel_call_destination=0x7f14ea800390, grpc.internal.event_engine=0x563aee417480, grpc.internal.security_connector=0x563aee548100, grpc.internal.subchannel_pool=0x563aee424a00, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563aee317320, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:24.854087659+01:00"}), backing off for 1000 ms
00:18:07.924  Traceback (most recent call last):
00:18:07.924    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in <module>
00:18:07.924      main(sys.argv[1:])
00:18:07.924    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main
00:18:07.924      result = client.call(request['method'], request.get('params', {}))
00:18:07.924               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:07.924    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call
00:18:07.924      response = func(request=json_format.ParseDict(params, input()))
00:18:07.924                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:07.924    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:07.924      return _end_unary_response_blocking(state, call, False, None)
00:18:07.924             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:07.924    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:07.924      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:07.924      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:07.924  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:07.924  	status = StatusCode.INVALID_ARGUMENT
00:18:07.924  	details = "Invalid device type"
00:18:07.924  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device type", grpc_status:3, created_time:"2024-12-09T11:08:24.855349481+01:00"}"
00:18:07.924  >
00:18:07.924   11:08:24 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:07.924   11:08:24 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:07.924   11:08:24 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:07.924   11:08:24 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:07.924   11:08:24 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:07.924    11:08:24 sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:07.924    11:08:24 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:08.183  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:08.183  I0000 00:00:1733738905.180937  253143 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:08.184  I0000 00:00:1733738905.182816  253143 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:08.184  I0000 00:00:1733738905.184131  253146 subchannel.cc:806] subchannel 0x56106e719b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56106e704840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56106e81e380, grpc.internal.client_channel_call_destination=0x7f84b8956390, grpc.internal.event_engine=0x56106e635ca0, grpc.internal.security_connector=0x56106e71c850, grpc.internal.subchannel_pool=0x56106e71c6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56106e563770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:25.18364861+01:00"}), backing off for 1000 ms
00:18:08.441  {}
00:18:08.441   11:08:25 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61
00:18:08.441    11:08:25 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:08.441    11:08:25 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys
00:18:08.441    11:08:25 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.441    11:08:25 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits'
00:18:08.442    11:08:25 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:08.442    11:08:25 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.442   11:08:25 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:08.442    11:08:25 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:08.442    11:08:25 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:08.712  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:08.712  I0000 00:00:1733738905.545601  253178 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:08.712  I0000 00:00:1733738905.547350  253178 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:08.712  I0000 00:00:1733738905.548757  253363 subchannel.cc:806] subchannel 0x557674590b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55767457b840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557674695380, grpc.internal.client_channel_call_destination=0x7f243cb0d390, grpc.internal.event_engine=0x5576744acca0, grpc.internal.security_connector=0x557674593850, grpc.internal.subchannel_pool=0x5576745936b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5576743da770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:25.548205031+01:00"}), backing off for 1000 ms
00:18:08.712  {}
00:18:08.712   11:08:25 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61
00:18:08.712    11:08:25 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:08.712    11:08:25 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.712    11:08:25 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:08.712    11:08:25 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys
00:18:08.712    11:08:25 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits'
00:18:08.712    11:08:25 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.712   11:08:25 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:08.712    11:08:25 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:08.712    11:08:25 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:08.970  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:08.970  I0000 00:00:1733738905.873680  253399 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:08.970  I0000 00:00:1733738905.875404  253399 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:08.970  I0000 00:00:1733738905.876826  253409 subchannel.cc:806] subchannel 0x55b42287db20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b422868840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b422982380, grpc.internal.client_channel_call_destination=0x7f39e9df9390, grpc.internal.event_engine=0x55b422799ca0, grpc.internal.security_connector=0x55b422880850, grpc.internal.subchannel_pool=0x55b4228806b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b4226c7770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:25.876260055+01:00"}), backing off for 1000 ms
00:18:08.970  {}
00:18:08.971    11:08:25 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys
00:18:08.971   11:08:25 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61
00:18:08.971    11:08:25 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits'
00:18:08.971    11:08:25 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:08.971    11:08:25 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.971    11:08:25 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:08.971    11:08:25 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.971   11:08:25 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops)
00:18:08.971   11:08:25 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:18:08.971   11:08:25 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:08.971    11:08:25 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:08.971    11:08:25 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.243    11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.243    11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:09.243   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.243  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:09.243  I0000 00:00:1733738906.202937  253439 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:09.243  I0000 00:00:1733738906.204690  253439 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:09.243  I0000 00:00:1733738906.205932  253440 subchannel.cc:806] subchannel 0x5610093e6b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5610093d1840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5610094eb380, grpc.internal.client_channel_call_destination=0x7f83d2803390, grpc.internal.event_engine=0x561009302ca0, grpc.internal.security_connector=0x5610093e9850, grpc.internal.subchannel_pool=0x5610093e96b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561009230770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:26.205494409+01:00"}), backing off for 1000 ms
00:18:09.243  Traceback (most recent call last):
00:18:09.243    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:09.243      main(sys.argv[1:])
00:18:09.243    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:09.243      result = client.call(request['method'], request.get('params', {}))
00:18:09.243               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.243    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:09.243      response = func(request=json_format.ParseDict(params, input()))
00:18:09.243                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.243    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:09.243      return _end_unary_response_blocking(state, call, False, None)
00:18:09.243             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.243    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:09.243      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:09.243      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.243  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:09.243  	status = StatusCode.INVALID_ARGUMENT
00:18:09.243  	details = "Unsupported QoS limit: maximum.rd_iops"
00:18:09.243  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:26.221536519+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.rd_iops"}"
00:18:09.243  >
00:18:09.502   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:09.502   11:08:26 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:09.502   11:08:26 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:09.502   11:08:26 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:09.502   11:08:26 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}"
00:18:09.503   11:08:26 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503    11:08:26 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:09.503    11:08:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.503    11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.503    11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:09.503   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.503  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:09.503  I0000 00:00:1733738906.478779  253470 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:09.503  I0000 00:00:1733738906.480393  253470 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:09.503  I0000 00:00:1733738906.481576  253471 subchannel.cc:806] subchannel 0x557d29925b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557d29910840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557d29a2a380, grpc.internal.client_channel_call_destination=0x7f930299e390, grpc.internal.event_engine=0x557d29841ca0, grpc.internal.security_connector=0x557d29928850, grpc.internal.subchannel_pool=0x557d299286b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557d2976f770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:26.481172976+01:00"}), backing off for 1000 ms
00:18:09.503  Traceback (most recent call last):
00:18:09.503    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:09.503      main(sys.argv[1:])
00:18:09.503    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:09.503      result = client.call(request['method'], request.get('params', {}))
00:18:09.503               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.503    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:09.503      response = func(request=json_format.ParseDict(params, input()))
00:18:09.503                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.503    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:09.503      return _end_unary_response_blocking(state, call, False, None)
00:18:09.503             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.503    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:09.503      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:09.503      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:09.503  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:09.503  	status = StatusCode.INVALID_ARGUMENT
00:18:09.503  	details = "Unsupported QoS limit: maximum.wr_iops"
00:18:09.503  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:26.49829315+01:00", grpc_status:3, grpc_message:"Unsupported QoS limit: maximum.wr_iops"}"
00:18:09.503  >
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:09.762   11:08:26 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762    11:08:26 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:09.762    11:08:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.762    11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.762    11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:09.762   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:09.762  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:09.762  I0000 00:00:1733738906.759120  253499 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:09.762  I0000 00:00:1733738906.760727  253499 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:09.762  I0000 00:00:1733738906.761969  253691 subchannel.cc:806] subchannel 0x557d65208b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557d651f3840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557d6530d380, grpc.internal.client_channel_call_destination=0x7fd2f0e78390, grpc.internal.event_engine=0x557d65124ca0, grpc.internal.security_connector=0x557d6520b850, grpc.internal.subchannel_pool=0x557d6520b6b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x557d65052770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:26.761531393+01:00"}), backing off for 1000 ms
00:18:10.021  [2024-12-09 11:08:26.774460] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist
00:18:10.021  Traceback (most recent call last):
00:18:10.021    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:10.021      main(sys.argv[1:])
00:18:10.021    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:10.021      result = client.call(request['method'], request.get('params', {}))
00:18:10.021               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.021    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:10.021      response = func(request=json_format.ParseDict(params, input()))
00:18:10.021                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.021    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:10.021      return _end_unary_response_blocking(state, call, False, None)
00:18:10.021             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.021    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:10.021      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:10.021      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.021  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:10.021  	status = StatusCode.NOT_FOUND
00:18:10.021  	details = "No device associated with device_handle could be found"
00:18:10.021  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"No device associated with device_handle could be found", grpc_status:5, created_time:"2024-12-09T11:08:26.778957471+01:00"}"
00:18:10.021  >
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:10.021   11:08:26 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021     11:08:26 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen
00:18:10.021    11:08:26 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 f1fe160b-6888-498e-9b4c-5e72d30648d8
00:18:10.021    11:08:26 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.021    11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.021    11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:10.021   11:08:26 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:10.280  I0000 00:00:1733738907.031627  253718 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:10.280  I0000 00:00:1733738907.033343  253718 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:10.280  I0000 00:00:1733738907.034706  253719 subchannel.cc:806] subchannel 0x561bd181eb20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561bd1809840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561bd1923380, grpc.internal.client_channel_call_destination=0x7f79751b2390, grpc.internal.event_engine=0x561bd173aca0, grpc.internal.security_connector=0x561bd1821850, grpc.internal.subchannel_pool=0x561bd18216b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561bd1668770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:27.03427842+01:00"}), backing off for 1000 ms
00:18:10.280  [2024-12-09 11:08:27.039235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: f1fe160b-6888-498e-9b4c-5e72d30648d8
00:18:10.280  Traceback (most recent call last):
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:10.280      main(sys.argv[1:])
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:10.280      result = client.call(request['method'], request.get('params', {}))
00:18:10.280               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:10.280      response = func(request=json_format.ParseDict(params, input()))
00:18:10.280                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:10.280      return _end_unary_response_blocking(state, call, False, None)
00:18:10.280             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:10.280      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:10.280      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:10.280  	status = StatusCode.NOT_FOUND
00:18:10.280  	details = "No volume associated with volume_id could be found"
00:18:10.280  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:27.04367839+01:00", grpc_status:5, grpc_message:"No volume associated with volume_id could be found"}"
00:18:10.280  >
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:10.280   11:08:27 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.280    11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.280    11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:10.280   11:08:27 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.280  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:10.280  I0000 00:00:1733738907.270759  253741 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:10.280  I0000 00:00:1733738907.272371  253741 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:10.280  I0000 00:00:1733738907.273614  253748 subchannel.cc:806] subchannel 0x5573b7c24b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5573b7c0f840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5573b7d29380, grpc.internal.client_channel_call_destination=0x7f4fe352b390, grpc.internal.event_engine=0x5573b7b40ca0, grpc.internal.security_connector=0x5573b7c27850, grpc.internal.subchannel_pool=0x5573b7c276b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5573b7a6e770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:27.273177592+01:00"}), backing off for 1000 ms
00:18:10.280  Traceback (most recent call last):
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:10.280      main(sys.argv[1:])
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:10.280      result = client.call(request['method'], request.get('params', {}))
00:18:10.280               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:10.280      response = func(request=json_format.ParseDict(params, input()))
00:18:10.280                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:10.280      return _end_unary_response_blocking(state, call, False, None)
00:18:10.280             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:10.280      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:10.280      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.280  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:10.280  	status = StatusCode.INVALID_ARGUMENT
00:18:10.280  	details = "Invalid volume ID"
00:18:10.280  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-12-09T11:08:27.274838692+01:00", grpc_status:3, grpc_message:"Invalid volume ID"}"
00:18:10.280  >
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:10.539   11:08:27 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539    11:08:27 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 b716f421-f7bd-4ce3-ab4c-22ebf6a4ba8f
00:18:10.539    11:08:27 sma.sma_qos -- sma/common.sh@20 -- # python
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@652 -- # local es=0
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.539    11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.539    11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]]
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py
00:18:10.539  WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
00:18:10.539  I0000 00:00:1733738907.519990  253772 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache
00:18:10.539  I0000 00:00:1733738907.521496  253772 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080'
00:18:10.539  I0000 00:00:1733738907.522753  253773 subchannel.cc:806] subchannel 0x55e067810b20 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e0677fb840, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e067915380, grpc.internal.client_channel_call_destination=0x7f99c08f1390, grpc.internal.event_engine=0x55e06772cca0, grpc.internal.security_connector=0x55e067813850, grpc.internal.subchannel_pool=0x55e0678136b0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e06765a770, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-12-09T11:08:27.522273699+01:00"}), backing off for 1000 ms
00:18:10.539  Traceback (most recent call last):
00:18:10.539    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module>
00:18:10.539      main(sys.argv[1:])
00:18:10.539    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main
00:18:10.539      result = client.call(request['method'], request.get('params', {}))
00:18:10.539               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.539    File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call
00:18:10.539      response = func(request=json_format.ParseDict(params, input()))
00:18:10.539                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.539    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__
00:18:10.539      return _end_unary_response_blocking(state, call, False, None)
00:18:10.539             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.539    File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking
00:18:10.539      raise _InactiveRpcError(state)  # pytype: disable=not-instantiable
00:18:10.539      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:18:10.539  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
00:18:10.539  	status = StatusCode.NOT_FOUND
00:18:10.539  	details = "Invalid device handle"
00:18:10.539  	debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid device handle", grpc_status:5, created_time:"2024-12-09T11:08:27.523952938+01:00"}"
00:18:10.539  >
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@655 -- # es=1
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:10.539   11:08:27 sma.sma_qos -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:10.539    11:08:27 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys
00:18:10.539   11:08:27 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61
00:18:10.798    11:08:27 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0
00:18:10.798    11:08:27 sma.sma_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.798    11:08:27 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits'
00:18:10.798    11:08:27 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:10.798    11:08:27 sma.sma_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.798   11:08:27 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT
00:18:10.798   11:08:27 sma.sma_qos -- sma/qos.sh@242 -- # cleanup
00:18:10.798   11:08:27 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 252454
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 252454 ']'
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 252454
00:18:10.798    11:08:27 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:10.798    11:08:27 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252454
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252454'
00:18:10.798  killing process with pid 252454
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 252454
00:18:10.798   11:08:27 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 252454
00:18:12.704   11:08:29 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 252456
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@954 -- # '[' -z 252456 ']'
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@958 -- # kill -0 252456
00:18:12.704    11:08:29 sma.sma_qos -- common/autotest_common.sh@959 -- # uname
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:12.704    11:08:29 sma.sma_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252456
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@960 -- # process_name=python3
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@964 -- # '[' python3 = sudo ']'
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252456'
00:18:12.704  killing process with pid 252456
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@973 -- # kill 252456
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@978 -- # wait 252456
00:18:12.704  
00:18:12.704  real	0m7.851s
00:18:12.704  user	0m10.733s
00:18:12.704  sys	0m1.219s
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:12.704   11:08:29 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x
00:18:12.704  ************************************
00:18:12.704  END TEST sma_qos
00:18:12.704  ************************************
00:18:12.704  
00:18:12.704  real	3m35.355s
00:18:12.704  user	6m15.329s
00:18:12.704  sys	0m21.649s
00:18:12.704   11:08:29 sma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:12.704   11:08:29 sma -- common/autotest_common.sh@10 -- # set +x
00:18:12.704  ************************************
00:18:12.704  END TEST sma
00:18:12.704  ************************************
00:18:12.704   11:08:29  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:18:12.704   11:08:29  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:18:12.704   11:08:29  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:18:12.704   11:08:29  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:18:12.704   11:08:29  -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:12.704   11:08:29  -- common/autotest_common.sh@10 -- # set +x
00:18:12.704   11:08:29  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:18:12.704   11:08:29  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:18:12.704   11:08:29  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:18:12.704   11:08:29  -- common/autotest_common.sh@10 -- # set +x
00:18:15.237  INFO: APP EXITING
00:18:15.237  INFO: killing all VMs
00:18:15.237  INFO: killing vhost app
00:18:15.237  INFO: EXIT DONE
00:18:16.172  0000:00:04.7 (8086 6f27): Already using the ioatdma driver
00:18:16.172  0000:00:04.6 (8086 6f26): Already using the ioatdma driver
00:18:16.172  0000:00:04.5 (8086 6f25): Already using the ioatdma driver
00:18:16.172  0000:00:04.4 (8086 6f24): Already using the ioatdma driver
00:18:16.172  0000:00:04.3 (8086 6f23): Already using the ioatdma driver
00:18:16.172  0000:00:04.2 (8086 6f22): Already using the ioatdma driver
00:18:16.172  0000:00:04.1 (8086 6f21): Already using the ioatdma driver
00:18:16.172  0000:00:04.0 (8086 6f20): Already using the ioatdma driver
00:18:16.172  0000:80:04.7 (8086 6f27): Already using the ioatdma driver
00:18:16.172  0000:80:04.6 (8086 6f26): Already using the ioatdma driver
00:18:16.172  0000:80:04.5 (8086 6f25): Already using the ioatdma driver
00:18:16.172  0000:80:04.4 (8086 6f24): Already using the ioatdma driver
00:18:16.172  0000:80:04.3 (8086 6f23): Already using the ioatdma driver
00:18:16.172  0000:80:04.2 (8086 6f22): Already using the ioatdma driver
00:18:16.172  0000:80:04.1 (8086 6f21): Already using the ioatdma driver
00:18:16.172  0000:80:04.0 (8086 6f20): Already using the ioatdma driver
00:18:16.172  0000:0d:00.0 (8086 0a54): Already using the nvme driver
00:18:17.547  Cleaning
00:18:17.547  Removing:    /dev/shm/spdk_tgt_trace.pid103259
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid101050
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid103259
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid104072
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid105331
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid105954
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid107246
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid107457
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid108240
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid108928
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid109604
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid110297
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid110984
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid111214
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid111447
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid111903
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid112797
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid116516
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid117158
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid117662
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid117822
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid119294
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid119502
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid120990
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid121198
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid121685
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid121861
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid122500
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid122778
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid124555
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid124985
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid125263
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid129606
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid144378
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid156177
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid173038
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid192539
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid192970
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid199992
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid210608
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid216927
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid223137
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid227521
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid227522
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid227523
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid243540
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid247579
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid248004
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid252454
00:18:17.547  Removing:    /var/run/dpdk/spdk_pid99521
00:18:17.547  Clean
00:18:17.547   11:08:34  -- common/autotest_common.sh@1453 -- # return 0
00:18:17.547   11:08:34  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:18:17.547   11:08:34  -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:17.547   11:08:34  -- common/autotest_common.sh@10 -- # set +x
00:18:17.547   11:08:34  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:18:17.547   11:08:34  -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:17.547   11:08:34  -- common/autotest_common.sh@10 -- # set +x
00:18:17.547   11:08:34  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:18:17.547   11:08:34  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]]
00:18:17.547   11:08:34  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log
00:18:17.547   11:08:34  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:18:17.547    11:08:34  -- spdk/autotest.sh@398 -- # hostname
00:18:17.547   11:08:34  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-wfp-17 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info
00:18:17.547  geninfo: WARNING: invalid characters removed from testname!
00:18:35.634   11:08:52  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:38.169   11:08:55  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:40.074   11:08:57  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:42.609   11:08:59  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:44.514   11:09:01  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:46.419   11:09:03  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info
00:18:48.334   11:09:05  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:48.334   11:09:05  -- spdk/autorun.sh@1 -- $ timing_finish
00:18:48.334   11:09:05  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt ]]
00:18:48.334   11:09:05  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:48.334   11:09:05  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:48.334   11:09:05  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt
00:18:48.334  + [[ -n 19560 ]]
00:18:48.334  + sudo kill 19560
00:18:48.344  [Pipeline] }
00:18:48.358  [Pipeline] // stage
00:18:48.364  [Pipeline] }
00:18:48.378  [Pipeline] // timeout
00:18:48.383  [Pipeline] }
00:18:48.396  [Pipeline] // catchError
00:18:48.401  [Pipeline] }
00:18:48.414  [Pipeline] // wrap
00:18:48.420  [Pipeline] }
00:18:48.432  [Pipeline] // catchError
00:18:48.451  [Pipeline] stage
00:18:48.452  [Pipeline] { (Epilogue)
00:18:48.465  [Pipeline] catchError
00:18:48.466  [Pipeline] {
00:18:48.478  [Pipeline] echo
00:18:48.480  Cleanup processes
00:18:48.485  [Pipeline] sh
00:18:48.772  + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:48.772  260370 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:48.785  [Pipeline] sh
00:18:49.071  ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:18:49.071  ++ grep -v 'sudo pgrep'
00:18:49.071  ++ awk '{print $1}'
00:18:49.071  + sudo kill -9
00:18:49.071  + true
00:18:49.082  [Pipeline] sh
00:18:49.367  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:57.499  [Pipeline] sh
00:18:57.787  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:57.787  Artifacts sizes are good
00:18:57.802  [Pipeline] archiveArtifacts
00:18:57.810  Archiving artifacts
00:18:58.149  [Pipeline] sh
00:18:58.437  + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest
00:18:58.476  [Pipeline] cleanWs
00:18:58.482  [WS-CLEANUP] Deleting project workspace...
00:18:58.482  [WS-CLEANUP] Deferred wipeout is used...
00:18:58.488  [WS-CLEANUP] done
00:18:58.489  [Pipeline] }
00:18:58.497  [Pipeline] // catchError
00:18:58.505  [Pipeline] sh
00:18:58.781  + logger -p user.info -t JENKINS-CI
00:18:58.790  [Pipeline] }
00:18:58.804  [Pipeline] // stage
00:18:58.809  [Pipeline] }
00:18:58.823  [Pipeline] // node
00:18:58.828  [Pipeline] End of Pipeline
00:18:58.861  Finished: SUCCESS